<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Cogini</title><link href="https://www.cogini.com/" rel="alternate"/><link href="https://www.cogini.com/feeds/atom.xml" rel="self"/><id>https://www.cogini.com/</id><updated>2024-01-08T00:00:00+08:00</updated><entry><title>Building and Testing Elixir Containers with GitHub Actions</title><link href="https://www.cogini.com/blog/building-and-testing-elixir-containers-with-github-actions/" rel="alternate"/><published>2024-01-08T00:00:00+08:00</published><updated>2024-01-08T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2024-01-08:/blog/building-and-testing-elixir-containers-with-github-actions/</id><summary type="html">&lt;p&gt;Here are the slides for the presentation
&lt;a href="https://www.cogini.com/files/elixir-cicd-with-containers-github-actions.pdf"&gt;Building and Testing Elixir Containers with GitHub Actions&lt;/a&gt;
I gave to the Denver Elixir user's group.&lt;/p&gt;
&lt;p&gt;It covers testing and other practical concerns when implementing microservices in Elixir.&lt;/p&gt;
&lt;p&gt;Here is the example code:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/cogini/phoenix_container_example"&gt;phoenix_container_example&lt;/a&gt;:
CI/CD system based on containerized build and test …&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;</summary><content type="html">&lt;p&gt;Here are the slides for the presentation
&lt;a href="https://www.cogini.com/files/elixir-cicd-with-containers-github-actions.pdf"&gt;Building and Testing Elixir Containers with GitHub Actions&lt;/a&gt;
I gave to the Denver Elixir user's group.&lt;/p&gt;
&lt;p&gt;It covers testing and other practical concerns when implementing microservices in Elixir.&lt;/p&gt;
&lt;p&gt;Here is the example code:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/cogini/phoenix_container_example"&gt;phoenix_container_example&lt;/a&gt;:
CI/CD system based on containerized build and test running in GitHub Actions, deploying to AWS ECS using Terraform&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/cogini/absinthe_federation_example"&gt;absinthe_federation_example&lt;/a&gt;: federated GraphQL with Apollo Router&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content><category term="DevOps"/><category term="elixir"/><category term="github actions"/><category term="containers"/><category term="testing"/><category term="aws"/><category term="terraform"/><category term="presentations"/></entry><entry><title>Breaking up the monolith: building, testing, and deploying microservices</title><link href="https://www.cogini.com/blog/breaking-up-the-monolith-building-testing-and-deploying-microservices/" rel="alternate"/><published>2024-01-02T00:00:00+08:00</published><updated>2024-01-02T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2024-01-02:/blog/breaking-up-the-monolith-building-testing-and-deploying-microservices/</id><summary type="html">&lt;p&gt;I recently worked with a large e-commerce company to "break up the monolith".&lt;/p&gt;
&lt;p&gt;They have three main applications, a web front end in Elixir/Phoenix, a large
Ruby on Rails application used for internal processing, and a large Absinthe
GraphQL API that glues the pieces together and integrates with third …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently worked with a large e-commerce company to "break up the monolith".&lt;/p&gt;
&lt;p&gt;They have three main applications, a web front end in Elixir/Phoenix, a large
Ruby on Rails application used for internal processing, and a large Absinthe
GraphQL API that glues the pieces together and integrates with third party APIs.&lt;/p&gt;
&lt;p&gt;The primary motivation for the project was to improve reliability by decoupling
the back end application from the public website. With big applications, it's
easy for a change in one component to break another unrelated component.
Just upgrading a library can be dangerous, but is necessary to deal with
security vulnerabilities.&lt;/p&gt;
&lt;p&gt;Another goal was to make independent components that could be developed more
quickly. Finally, they wanted to improve security by compartmentalizing access
and making individual services easier to audit.&lt;/p&gt;
&lt;p&gt;We ended up with a plan to extract relatively-large components into
services such as "users", which handled user registration, login, and settings.
We needed to be able to incrementally update the system while keeping
compatibility and the ability to roll back.
We used &lt;a href="https://www.apollographql.com/docs/federation/"&gt;federation&lt;/a&gt; to break up the
GraphQL backend into separate services while keeping the same public API.&lt;/p&gt;
&lt;p&gt;Some things that are "nice to have" in monolithic systems become critical
with microservices, e.g.:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Effective automated testing, including comprehensive tests which ensure that
  services can be updated without breaking clients&lt;/li&gt;
&lt;li&gt;Fast, easy, and reliable deployment&lt;/li&gt;
&lt;li&gt;Effective processes for development and QA that work with multiple components&lt;/li&gt;
&lt;li&gt;Observability to identify and debug problems which go across components&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some specific things we implemented:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Testing against the deployed OS image, allowing OS updates to be tested
  against the code, particularly important when security issues are
  identified in base images&lt;/li&gt;
&lt;li&gt;Static code analysis tools and security scanners, improving quality&lt;/li&gt;
&lt;li&gt;Test results integrated into the pull request UI, providing actionable
  feedback to developers&lt;/li&gt;
&lt;li&gt;Supporting development and QA across multiple services in local or review
  environments&lt;/li&gt;
&lt;li&gt;Improved build performance through better caching and parallel execution,
  reducing cost through better efficiency and improving developer experience by
  reducing time spent waiting when deploying&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some challenges we faced included:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Difficulty splitting up complex GraphQL schemas and untangling application
  dependencies&lt;/li&gt;
&lt;li&gt;Application configuration, particularly secret management&lt;/li&gt;
&lt;li&gt;Difficulty assigning code ownership&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Show me the code!&lt;/h2&gt;
&lt;p&gt;The rest of this post gives details and motivation for the architecture at a high
level. But first, here are some running examples that show how to do it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/cogini/phoenix_container_example"&gt;phoenix_container_example&lt;/a&gt;
shows a CI/CD system based on containerized build and tests running in
GitHub Actions, deploying to AWS using Terraform.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cogini/phoenix_container_example"&gt;absinthe_federation_example&lt;/a&gt;
shows how to test federated GraphQL applications based on Apollo Router.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See below for a description of how the code works.&lt;/p&gt;
&lt;h2&gt;Testing is critical&lt;/h2&gt;
&lt;p&gt;Testing is the most important part of microservices, particularly in a situation
where the monolith has become big enough that it's causing problems.&lt;/p&gt;
&lt;p&gt;We need a hierarchy of tests, ranging from unit tests that are quick to run
but artificial, to integration tests that exercise the real running system:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Unit tests with synthetic data embedded in the code or external files&lt;/li&gt;
&lt;li&gt;Unit tests with data from a database&lt;/li&gt;
&lt;li&gt;Unit tests for an external service with mocked APIs, i.e., not actually
   communicating with the service&lt;/li&gt;
&lt;li&gt;Unit tests for an external service (e.g., Salesforce or Algolia)
   communicating with the service in a test environment&lt;/li&gt;
&lt;li&gt;External tests with data from a database or external service in a test
   environment&lt;/li&gt;
&lt;li&gt;External tests that combine data from multiple services, i.e., GraphQL
   federation.&lt;/li&gt;
&lt;li&gt;Health checks&lt;/li&gt;
&lt;/ol&gt;
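&lt;p&gt;The first level of this hierarchy can be sketched in a few lines: the test data lives right next to the assertions, with no database or external service involved. Here is a minimal Ruby example (the &lt;code&gt;CartTotal&lt;/code&gt; module and its discount rule are hypothetical, for illustration only):&lt;/p&gt;

```ruby
# Level 1: a unit test with synthetic data embedded in the code.
# CartTotal is a hypothetical pricing module used for illustration.
module CartTotal
  # Sum line totals, applying a flat discount above a threshold.
  def self.total(items, discount_threshold: 100, discount: 0.1)
    subtotal = items.sum { |i| i[:price] * i[:qty] }
    subtotal > discount_threshold ? (subtotal * (1 - discount)).round(2) : subtotal
  end
end

# Synthetic fixture data lives right next to the test; no database needed.
ITEMS = [
  { price: 40, qty: 2 },  # line total 80
  { price: 30, qty: 1 },  # line total 30
].freeze

# Subtotal is 110, above the threshold, so the discount applies.
raise "expected discounted total" unless CartTotal.total(ITEMS) == 99.0
```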
&lt;p&gt;The core process is based on running automated tests in dev and CI, with
optional manual testing by QA. We can confidently deploy code if we
have good automated code coverage. We also need observability support to
identify regressions in production and debug them. Additional deeper checks may
be valuable after the code is released, e.g., load tests or security checks.&lt;/p&gt;
&lt;p&gt;Unit tests are written in Elixir or Ruby using their unit test frameworks.
They start with synthetic data embedded in code or external files.
Subsequent tests cover higher-level APIs that pull data from a database or
communicate with an external service. We start by creating a mock version of
the external service which returns test data. Some services provide a test
sandbox or test mode for the production service, allowing us to perform
integration testing against the real service. We may also be able to run
services in containers for testing.&lt;/p&gt;
&lt;p&gt;Here, external/integration tests use Postman/Newman.
&lt;a href="https://www.postman.com/automated-testing/"&gt;Postman&lt;/a&gt; is an interactive UI for
creating and managing API tests, used by testers in development or QA environments.
It includes a headless runner
(&lt;a href="https://learning.postman.com/docs/running-collections/using-newman-cli/command-line-integration-with-newman/"&gt;Newman&lt;/a&gt;)
for CI.&lt;/p&gt;
&lt;p&gt;There is a trade-off between synthetic tests and integration tests. Synthetic
tests run quickly and reliably but may end up not being accurate, as they are
effectively only testing fake data and mocks. Integration tests exercise the
full end-to-end functionality, e.g., talking to actual external services. They
are more accurate but run slower and may be flaky when external services
are unavailable.&lt;/p&gt;
&lt;h4&gt;Mocks and fake data&lt;/h4&gt;
&lt;p&gt;Mocks are functions that have the same interface as a library or service but
return predefined data instead of calling the real service. They avoid
dependencies on external services when running tests and improve test speed.
Mock data may not reflect the actual behavior of the system, however. We may
end up testing our mocks, not reality.&lt;/p&gt;
&lt;p&gt;As we break up services, we need a solid test suite that validates
the external API for each service, providing a contract for consumers.
Whatever changes we make internally should not break other parts of the system.
We can collect queries running on the current system and use them as a
regression test to ensure that we are not breaking clients, even if they are
somehow invalid. For example, after upgrading Absinthe GraphQL, it may start
rejecting invalid queries that it accepted before.&lt;/p&gt;
&lt;p&gt;While useful, the mocking process can result in code and configuration
complexity. Where possible, we should avoid mocking in favor of using real
data.&lt;/p&gt;
&lt;p&gt;Using &lt;a href="https://github.com/elixir-tesla/tesla"&gt;Tesla&lt;/a&gt; for the HTTP client library
simplifies the mocking process. It includes a test “adapter” that removes the
need to change the server URL at runtime, something that causes configuration
problems.&lt;/p&gt;
&lt;p&gt;Seed data is also needed to provide a stable base for external API tests.
It provides a base of data that tests can expect to be there, e.g., users and
products. Other data may be created during the tests, e.g., creating an order
for a product.&lt;/p&gt;
&lt;p&gt;A larger but more comprehensive database snapshot is also useful for
dev and review environments. It is an anonymized subset of the production
database, with enough realistic data to support testing. Larger databases can
also be used for, e.g., overnight load testing in staging.&lt;/p&gt;
&lt;p&gt;One performance trick is to snapshot the schema and test data as SQL,
then build it into the Postgres test container. This is much faster than
running lots of little database migrations.&lt;/p&gt;
&lt;h2&gt;Static code analysis tools&lt;/h2&gt;
&lt;p&gt;Static analysis tools perform quality, consistency, and security checks on
code. Manually written tests tend to exercise the normal execution code paths,
while consistency tools find problems with less common code paths.
Quality tools identify error-prone programming patterns, failure to handle error
cases, security issues, and the like.&lt;/p&gt;
&lt;p&gt;Examples for Elixir include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/rrrene/credo"&gt;Credo&lt;/a&gt; (code quality)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.erlang.org/doc/man/dialyzer.html"&gt;Dialyzer&lt;/a&gt; (type consistency
  checking)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/mirego/mix_audit"&gt;mix audit&lt;/a&gt; (security check for Elixir
  packages with known issues)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hexdocs.pm/mix/1.14.1/Mix.Tasks.Test.Coverage.html"&gt;Test coverage&lt;/a&gt;
  (checking what percentage of the code the tests exercise)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nccgroup/sobelow"&gt;Sobelow&lt;/a&gt; (web security for Phoenix,
  e.g., cross-site scripting, SQL injection)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.aquasec.com/products/trivy/"&gt;Trivy&lt;/a&gt; (vulnerability scanner for
  code and operating system). Other similar tools include
  Grype, Gitleaks, Snyk, GitHub Advanced Security, and SonarQube&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/adobe/elixir-styler"&gt;Styler&lt;/a&gt; is a plugin for the Elixir
  formatter that fixes problems identified by Credo instead of complaining.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The results of static code analysis need to be surfaced to developers so they
can take action. Best is to integrate results into the developer’s editor,
showing errors inline. Next best is running in CI.&lt;/p&gt;
&lt;p&gt;Using a tool like &lt;a href="https://houndci.com/"&gt;Hound&lt;/a&gt;, we can integrate test results
into the PR process as comments. This avoids the "wall of text" that nobody
reads unless it actually breaks the build. Developers can first fix these
issues, allowing reviewers to focus on high-level concerns.&lt;/p&gt;
&lt;p&gt;Adding quality checks to an existing project can be challenging, as it
results in hundreds of issues. It can be tough getting them through a
traditional PR approval process. One solution is to snapshot the results
and only break on new problems.&lt;/p&gt;
&lt;h2&gt;Testing in containers&lt;/h2&gt;
&lt;p&gt;Many CI systems don't test the actual running system. They run tests in a
generic Linux container, effectively just checking out the code and running
&lt;code&gt;mix test&lt;/code&gt; the same as on a developer's machine.&lt;/p&gt;
&lt;p&gt;Instead, we should run tests in an environment that matches the target system as
closely as possible, allowing us to identify problems from library
incompatibilities or misconfiguration. This lets us safely upgrade the base
image, e.g., in response to security vulnerabilities, and test that the code
works with it.&lt;/p&gt;
&lt;p&gt;Security vulnerabilities represent a fundamentally different workflow from
development. In the normal process, we start with a PR, run it through a test
and review process, and then release it when it is ready. In the security process,
we start with the production code/released container, then run security scans which
identify newly-discovered vulnerabilities. Those trigger a process of upgrading
libraries/OS releases and testing code against them. The review process for
security issues needs to be able to run quickly and safely, mostly based on
automated tests, as we are vulnerable to security problems until the update is
in production.&lt;/p&gt;
&lt;h2&gt;Example CI build and test process on GitHub Actions&lt;/h2&gt;
&lt;p&gt;The GitHub Actions &lt;a href="https://github.com/cogini/phoenix_container_example/blob/main/.github/workflows/ci.yml"&gt;build process&lt;/a&gt;
runs tests against containers, including unit tests, code quality checks,
external API tests, and security scans.&lt;/p&gt;
&lt;p&gt;&lt;img src="/images/github-actions-screenshot.png" alt="GitHub Actions CI for containerized testing" width="100%"/&gt;&lt;/p&gt;
&lt;p&gt;As part of CI, it builds two containers, test and prod. The test container has
the source code and potentially other test tools. The prod image only has the
minimal OS and application. The application is built using Erlang releases,
which contain only the code needed for the production application, reducing
size and attack surface.&lt;/p&gt;
&lt;p&gt;The CI process first builds the two containers in parallel, then runs tests
against them in parallel.&lt;/p&gt;
&lt;p&gt;It uses a containerized version of the database which is initialized using
migrations and seed data. Internal tests use a combination of ephemeral test
data and persistent seed data in the database. It also includes containerized
versions of other back end services such as Redis and Kafka.&lt;/p&gt;
&lt;p&gt;External tests make calls using Newman against the service's public GraphQL
API, returning results from persistent seed data in the database. These
integration tests have full fidelity to the production environment. In order to
implement this, we need to write the API tests (or extract them from unit
tests) and build seed data to support them.&lt;/p&gt;
&lt;p&gt;For a microservices system, we can run the external tests against multiple
containers at once. The
&lt;a href="https://github.com/cogini/phoenix_container_example"&gt;absinthe_federation_example&lt;/a&gt;
tests bring up containers for
&lt;a href="https://www.apollographql.com/docs/router/"&gt;Apollo GraphQL Router&lt;/a&gt;,
the container for the newly-updated code, as well as release containers for
other services that are part of the same user scenario. We then run tests using
Newman against the Apollo Router container, which routes requests to one or
more service containers. Similarly, we can run tests for a website that calls a
back-end API. Or we can run a headless browser testing framework to test the
front end.&lt;/p&gt;
&lt;p&gt;When we run CI for a service, we test that new code for the service works well
in isolation and in combination with other services. Once these tests pass, we
create a new production image for the service. We push this image to the GitHub
Container Registry (GHCR) as part of testing. In the final step, we push to a
production AWS ECR repository. GHCR runs internally to GitHub, improving
build performance. As a result, tests always have access to the production
container image for each released service as well as the new release of the
code for the current service. This allows us to run integration tests against
all the services as a group.&lt;/p&gt;
&lt;p&gt;All of the above tests can run in parallel, so the total build and test time is
fast (about two minutes). Tests can also be parallelized by partitioning
the test run if they take a lot of time.&lt;/p&gt;
&lt;h2&gt;Security scanning&lt;/h2&gt;
&lt;p&gt;The build process runs security scanners in three phases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;During testing, run unit tests, code quality tools, and security checks on the code.&lt;/p&gt;
&lt;p&gt;These run on the test image, which includes a checkout of the app source
code. We run tools such as Trivy to check for security problems on our code and
dependencies. It reads JavaScript package files to identify dependencies
with known vulnerabilities.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After building the prod container, run security checks on it.&lt;/p&gt;
&lt;p&gt;This container has only the minimum needed to run the app and has minimized
versions of final JavaScript code. It runs security checks again, this time
checking the OS image for known vulnerabilities and configuration problems,
e.g., world-writable directories.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After release, periodically run security checks to identify newly
    identified vulnerabilities.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Code-level security problems should break the build. It’s probably also the
best location to deal with vulnerable JavaScript libraries. These kinds of
issues could be found after release, as well, so it probably makes sense to run
most of this in the periodic job as well.&lt;/p&gt;
&lt;p&gt;Here, the scan outputs results in
&lt;a href="https://github.com/microsoft/sarif-tutorials/blob/main/docs/1-Introduction.md"&gt;SARIF&lt;/a&gt;
format. It then uploads the results to GitHub, where they appear in the
Security tab. This requires a GitHub Advanced Security license, though it is
free for public open source. It is straightforward to use a similar mechanism
to run any other security scanning tools. Using command line tools avoids
pricing models that charge per developer.&lt;/p&gt;
&lt;h2&gt;Health checks&lt;/h2&gt;
&lt;p&gt;External API tests and scenario-based production health checks are quite similar.
In production, instead of simply checking for success or failure, we can run a
request against the production database and ensure that we get back the data
that we expect.&lt;/p&gt;
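&lt;p&gt;As a sketch of the idea, a scenario-based health check can reuse the same comparison logic as an external API test. In this Ruby example, the expected seed record and the response payloads are hypothetical; in production the response body would come from an HTTP call to the service:&lt;/p&gt;

```ruby
require "json"

# A scenario-based health check: run a known query against the live service
# and verify the payload matches the seed data we expect to find there.
EXPECTED_USER = { "id" => 42, "email" => "healthcheck@example.com" }.freeze

# Compare a live response body against the expected seed record.
# Returns true only when every expected field is present and equal.
def healthy?(response_body)
  data = JSON.parse(response_body)
  EXPECTED_USER.all? { |k, v| data[k] == v }
rescue JSON::ParserError
  false
end

# Simulated responses: one correct, one missing an expected field.
ok  = healthy?('{"id": 42, "email": "healthcheck@example.com", "name": "HC"}')
bad = healthy?('{"id": 42}')
```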
&lt;h2&gt;Developer experience&lt;/h2&gt;
&lt;p&gt;Splitting up services can improve the experience for developers, as they can
work on a smaller codebase with faster, more reliable tests and shorter
development cycles. They can make changes without worrying that they will break
another part of the code that they are not involved with. CI tests and static
code analysis make deployments more reliable.&lt;/p&gt;
&lt;p&gt;Developers need a quick feedback loop when developing code, so they need quick
tests that fail fast. We then run additional tests in CI that take more
time or rely on external services. These services may be unreliable and block
deployment, as we need to retry tests until they succeed.&lt;/p&gt;
&lt;p&gt;Fast builds also reduce the impact of outages. When your tests take 30 minutes
to run, the duration of any outage that requires a code fix is a multiple of 30 minutes.&lt;/p&gt;
&lt;p&gt;In this application, the Ruby on Rails tests took 4 hours to run on a
developer's machine, so everyone relied on CI. With parallel execution, it
would take about 25 minutes to run there. Flaky tests were a big problem. They
would randomly fail, then succeed on the next run (or the next). We spent
effort to instrument the builds to identify flaky tests and prioritize fixing
them, improving developer experience and avoiding wasted time.&lt;/p&gt;
&lt;p&gt;The downside with microservices is that developers need more dependencies up
and running in order to work on a piece of code. They might need a dozen
services running to be able to test their changes. Containerized development
helps with that. The containerized build process means that standard, pre-built
images for each service are available in GitHub, so developers do not need to
build images locally.&lt;/p&gt;
&lt;p&gt;The biggest complaint about containerized development is performance.
Some &lt;a href="https://www.docker.com/blog/speed-boost-achievement-unlocked-on-docker-desktop-4-6-for-mac/"&gt;recent improvements to disk I/O&lt;/a&gt;
in Docker help. It should be possible to run a local database and develop
code locally while talking to containerized versions of other services.&lt;/p&gt;
&lt;p&gt;Using local Kubernetes for development makes it more consistent with the
production environment. Another option is to give each developer their own
equivalent environment in the cloud. They can then use one or more services
locally while connecting back to the development environment, e.g.,
with &lt;a href="https://www.telepresence.io/"&gt;https://www.telepresence.io&lt;/a&gt;.&lt;/p&gt;</content><category term="Programming"/><category term="microservices"/><category term="testing"/><category term="github actions"/></entry><entry><title>Jobs vs Events</title><link href="https://www.cogini.com/blog/jobs-vs-events/" rel="alternate"/><published>2024-01-01T00:00:00+08:00</published><updated>2024-01-01T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2024-01-01:/blog/jobs-vs-events/</id><summary type="html">&lt;p&gt;Kafka is popular as a backbone messaging service to connect services. You may
be used to using background job processing services like Resque in Ruby or Oban
in Elixir, and wonder whether you can replace them with Kafka. In practice,
both have their benefits.&lt;/p&gt;
&lt;p&gt;This post discusses issues that come …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Kafka is popular as a backbone messaging service to connect services. You may
be used to using background job processing services like Resque in Ruby or Oban
in Elixir, and wonder whether you can replace them with Kafka. In practice,
both have their benefits.&lt;/p&gt;
&lt;p&gt;This post discusses issues that come up when using Kafka and architectural
patterns that can help.&lt;/p&gt;
&lt;h2&gt;Kafka architecture&lt;/h2&gt;
&lt;p&gt;Kafka is fundamentally different from Resque and other messaging systems like
RabbitMQ. These systems have items that are added to queues, processed, and
removed. Kafka instead uses a "topic", which acts as a log or ledger.
Publishers add records to the end of the log, and they never go away. The Kafka
server assigns each record an ever-increasing integer "offset" which indicates
its position in the log. For practical reasons, we may purge old records from
the system, e.g., to save space, but the offset keeps increasing.&lt;/p&gt;
&lt;p&gt;Multiple independent applications can read from the same topic. Each
application keeps track of which records it has processed by remembering the
offset. One application might start at the beginning of the available records
and read each one by one, doing some work or analysis. Another might wait for
new records and perform some action.&lt;/p&gt;
&lt;p&gt;This architecture makes Kafka great as a permanent record of events, but it
doesn't specify what should be done with the records. Most importantly, it
doesn't cover error handling.&lt;/p&gt;
&lt;p&gt;For example, assume we have an application that looks for new items in the
product catalog and puts them into Elasticsearch. It reads a record from Kafka,
parses the data, creates a JSON document, and submits it to Elasticsearch for
indexing.&lt;/p&gt;
&lt;h3&gt;Permanent failures&lt;/h3&gt;
&lt;p&gt;If the app cannot parse a data record, then it needs to log the issue and
continue, otherwise, the bad record blocks further processing. The failure is
“permanent”, i.e., it's fundamental to the data.&lt;/p&gt;
&lt;p&gt;In practice, it may be that changing the code would allow the data to be
handled, and we could reprocess the failed records. We also need to be able to
debug problems with the system, figuring out where the data came from and how
often the issues are occurring. A common strategy is to write records to a
"Dead Letter Queue" (DLQ) which we can monitor, review, and reprocess.&lt;/p&gt;
&lt;p&gt;The big difference between normal processing and DLQ processing is error
handling. When processing normally, we read a record from Kafka, try to parse
it, and if it fails (or there is some other permanent error like data
validation), we put it in the DLQ.&lt;/p&gt;
&lt;p&gt;When processing the DLQ, we start at a particular offset in the DLQ and try to
parse records. If they succeed, we do the normal processing. Otherwise we just
move on to the next record. As we process the DLQ the same way as regular
records, doing it in the same application generally makes sense. We can just
logically "poke" the consumer and tell it to process the DLQ. When processing,
we may want to keep track of which records we have already processed and avoid
processing them again. Whether this matters depends on the service: on whether
processing is idempotent and whether processing records out of order is acceptable.&lt;/p&gt;
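&lt;p&gt;A minimal Ruby sketch of this normal-path/DLQ split, with a plain array standing in for the Kafka DLQ topic and the Elasticsearch indexing step stubbed out (names are illustrative):&lt;/p&gt;

```ruby
require "json"

# The DLQ is modeled as a plain array standing in for a Kafka topic.
DLQ = []

def index(doc)
  # Submit to Elasticsearch in the real system; a no-op in this sketch.
end

# Normal processing: parse failures are permanent errors, so park the
# record on the DLQ and keep going rather than blocking the topic.
def handle(record)
  doc = JSON.parse(record)
  index(doc)
  :ok
rescue JSON::ParserError
  DLQ.push(record)
  :dead_lettered
end

# Reprocessing the DLQ after a code fix: retry each record with the same
# handler; records that still fail simply go back on the queue.
def reprocess_dlq
  failed = DLQ.dup
  DLQ.clear
  failed.each { |record| handle(record) }
end

handle('{"sku": "A-1"}')   # normal processing succeeds
handle("not json")         # goes to the DLQ, processing continues
```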
&lt;p&gt;One of the most significant use cases is having a bug or version skew in the
consumer, causing mass failures in parsing input records. In this case, we can
update the code and then reprocess the DLQ, most of which would succeed. This
can result in huge numbers of messages on the DLQ. We need to monitor the
number of messages and the rate of errors, then alert ops in case of problems.&lt;/p&gt;
&lt;h3&gt;Transient failures&lt;/h3&gt;
&lt;p&gt;Temporary problems are more common in everyday processing. It might be that the
target system, e.g., Elasticsearch, is unavailable or overloaded. The
application needs to retry if it can't connect or rate-limit processing. A
naive sender can overwhelm a target system, "kicking it when it's down".&lt;/p&gt;
&lt;p&gt;Kafka is generally silent about these kinds of error-handling strategies, as
each application has different requirements. The application or programming
language framework needs to make its own decisions. It needs to handle retry
logic, rate limiting, duplicate processing, and coordinate between multiple
processes working together.&lt;/p&gt;
&lt;h3&gt;Job frameworks&lt;/h3&gt;
&lt;p&gt;Job processing frameworks such as Resque focus almost entirely on the transient
processing of messages. They consider each job to be unique. They have
sophisticated functions to register the job, schedule it, retry it if it fails,
and troubleshoot the system.&lt;/p&gt;
&lt;p&gt;Resque is often used to deal with the fact that Ruby doesn't do concurrency
particularly well and that Rails processes are heavyweight, taking up a lot of
system resources. It is useful for performance to split processing into
interactive parts and asynchronous background processes.&lt;/p&gt;
&lt;p&gt;For example, in an e-commerce system, the user creates an order, and the
application responds immediately with "Thank you for your order". It then
triggers multiple background jobs, e.g., sending a confirmation email, running
anti-fraud checks, and starting fulfillment. Any one of these processes might fail
temporarily and be retried.&lt;/p&gt;
&lt;p&gt;Sometimes we do work in the background for practical reasons. A good example is
starting a report process which might take a long time to complete. If we
could reliably handle it immediately from the same process, we would. Triggering
a background job, returning immediately, and notifying the user when the job is
done reduces resource usage, however, and may also provide a better user
experience.&lt;/p&gt;
&lt;h2&gt;Leveraging both&lt;/h2&gt;
&lt;p&gt;In practice, job handling systems are convenient to use and mature. We can
combine Kafka for "events" and Resque/Oban for "jobs". A Kafka consumer simply
reads records from a topic and creates job records, which it schedules for
processing. We still need to avoid overloading the job system, but otherwise,
the error-handling logic is relatively simple.&lt;/p&gt;
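&lt;p&gt;A sketch of this bridge in Ruby: the consumer's only responsibility is to map each event to a job and enqueue it. The event types and job names here are hypothetical; the &lt;code&gt;Resque.enqueue&lt;/code&gt; call is shown in a comment, and Oban fills the same role in Elixir:&lt;/p&gt;

```ruby
# Map event types to the background jobs that handle them.
# These event and job names are hypothetical, for illustration.
JOB_FOR_EVENT = {
  "ORDER_CREATED"    => "SendConfirmationEmailJob",
  "CUSTOMER_CREATED" => "SyncCrmJob",
}.freeze

# Translate a consumed record into a job class name plus arguments,
# or nil for event types this service does not care about.
def job_for(record)
  job = JOB_FOR_EVENT[record[:type]]
  job ? [job, record[:payload]] : nil
end

# In a real consumer loop this would look something like:
#   consumer.each_message do |msg|
#     job, args = job_for(parse(msg))
#     Resque.enqueue(Object.const_get(job), args) if job
#   end
job_for({ type: "ORDER_CREATED", payload: { order_id: 7 } })
```

&lt;p&gt;Keeping the translation a pure function makes it trivial to unit test without a running Kafka broker.&lt;/p&gt;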
&lt;h2&gt;Access to data&lt;/h2&gt;
&lt;p&gt;A key question is how the application accesses the data it needs to process.&lt;/p&gt;
&lt;p&gt;One option is that the record has everything needed to perform the work.
Another is that it only has a reference to the data, and the app reads the data
from an external source.&lt;/p&gt;
&lt;p&gt;For simple events, we can put everything into the job. For example, we could
generate an event whenever someone fails to log into the system. It might
be an account ID, email address, timestamp, and metadata like IP address and
browser. An anti-fraud system could look at that and identify that we are under
attack from bots.&lt;/p&gt;
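&lt;p&gt;Because these events are self-contained, an anti-fraud consumer can work from the records alone. A Ruby sketch (the field names and threshold are illustrative):&lt;/p&gt;

```ruby
# Simple events carry everything needed: account, source IP, and timestamp.
EVENTS = [
  { type: "LOGIN_FAILED", email: "a@example.com", ip: "10.0.0.9", at: 1 },
  { type: "LOGIN_FAILED", email: "b@example.com", ip: "10.0.0.9", at: 2 },
  { type: "LOGIN_FAILED", email: "c@example.com", ip: "10.0.0.9", at: 3 },
  { type: "LOGIN_FAILED", email: "a@example.com", ip: "10.1.1.1", at: 4 },
].freeze

# Flag source IPs that failed logins against a suspicious number of
# distinct accounts, suggesting a bot attack.
def suspicious_ips(events, threshold: 3)
  by_ip = events.group_by { |e| e[:ip] }
  by_ip.select { |_ip, evs| evs.map { |e| e[:email] }.uniq.size >= threshold }.keys
end

suspicious_ips(EVENTS)   # flags "10.0.0.9": three distinct accounts
```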
&lt;p&gt;The data could be huge, though, with a complex schema. For example, when we
create a new item for sale, we probably don't want to serialize it to JSON and
put it into Kafka. The processes reading the record would need to understand
the format and extract the parts they care about. Evolving the schema over time
is hard, and we risk getting out of sync, resulting in many parse failures (see
the DLQ discussion above).&lt;/p&gt;
&lt;p&gt;If everything is on the same system, then we just need a database ID, and the
processor can read from the same database using, e.g., the same ActiveRecord
models.&lt;/p&gt;
&lt;p&gt;In a services architecture, we create services that logically own data and
other applications can access the service via an API to get the data they need.
For example, we might have a Customer service that handles registrations and
manages information such as shipping addresses and payment methods. The
service would publish an event for a new registration and another whenever
it changes. The event includes the customer's unique identifier that
API consumers can use to access the data. Similarly, we can have a Catalog
service that manages information about items for sale.&lt;/p&gt;
&lt;p&gt;We could use gRPC for service-to-service communication, as it gives better
performance, but the schema is relatively rigid. Using a GraphQL API may make
clients more resilient to schema updates.&lt;/p&gt;
&lt;p&gt;Another option is a “change data capture” stream. Whenever the system of record
changes, it sends an update with the change. For example, if a customer changes
their delivery address in a Customer service, the event might include the new
address as a key/value pair. Receiving systems can then handle the update
directly.&lt;/p&gt;
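&lt;p&gt;A hypothetical change-data-capture event for the address update, with the changed fields as key/value pairs a receiving system can apply directly:&lt;/p&gt;

```ruby
# Hypothetical CDC event: the Customer service publishes the changed
# fields as key/value pairs when a delivery address changes.
cdc_event = {
  type: "CUSTOMER_UPDATED",
  customer_id: "cus_123",
  changes: {"delivery_address" => "221B Baker Street, London"}
}

# A receiving system applies the changed fields to its local copy.
def apply_changes(local_record, event)
  local_record.merge(event[:changes])
end

local = {"name" => "S. Holmes", "delivery_address" => "10 Downing Street"}
updated = apply_changes(local, cdc_event)
```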
&lt;p&gt;A similar pattern can be used for systems that require an audit trail that
indicates who made a specific change. Kafka is ideal for this, as it provides
an immutable record of changes over time, as opposed to retaining only the
current value.&lt;/p&gt;
&lt;h2&gt;Events&lt;/h2&gt;
&lt;p&gt;A relatively generic event schema could be as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Event type, e.g., &lt;code&gt;CUSTOMER_CREATED&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Key/value data related to the event, e.g., &lt;code&gt;customer_id&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Timestamp&lt;/li&gt;
&lt;li&gt;Source, e.g., &lt;code&gt;CUSTOMER_SERVICE&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
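&lt;p&gt;A minimal constructor matching that schema might look like this (the exact shape is an assumption for illustration):&lt;/p&gt;

```ruby
require "time"

# Builds a generic event matching the schema bullets above.
def build_event(type, data, source)
  {
    type: type,                       # e.g. CUSTOMER_CREATED
    data: data,                       # key/value data, e.g. customer_id
    timestamp: Time.now.utc.iso8601,  # when the event occurred
    source: source                    # e.g. CUSTOMER_SERVICE
  }
end

event = build_event("CUSTOMER_CREATED", {customer_id: "cus_123"}, "CUSTOMER_SERVICE")
```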
&lt;p&gt;As an example, the order process could create an &lt;code&gt;ITEM_SOLD&lt;/code&gt; event, and
potentially multiple consumer processes would be interested in it. One might
read the event, read the details of the item, then write a record to a
commissions table. Similarly, publishing a product could trigger indexing it in
Elasticsearch.&lt;/p&gt;
&lt;h2&gt;Event sourcing&lt;/h2&gt;
&lt;p&gt;Event sourcing is an architecture where a system records a series of updates,
and other systems consume those updates to keep their own state of the world.&lt;/p&gt;
&lt;p&gt;This is particularly helpful when tracking a series of changes over
time. For hundreds of years, this has been the approach used for a financial
ledger. For example, a bank account tracks deposits and withdrawals over time,
keeping a running balance.&lt;/p&gt;
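&lt;p&gt;Event sourcing in miniature: the balance is not stored, it is derived by replaying the ledger.&lt;/p&gt;

```ruby
# An account balance derived by replaying a ledger of deposits and
# withdrawals, rather than stored as a mutable value.
ledger = [
  {type: :deposit, amount: 100},
  {type: :withdrawal, amount: 30},
  {type: :deposit, amount: 25}
]

balance = ledger.reduce(0) do |acc, entry|
  entry[:type] == :deposit ? acc + entry[:amount] : acc - entry[:amount]
end
# balance is 95
```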
&lt;p&gt;Ledgers are great for financial systems but may be too much detail for
inter-service communication.&lt;/p&gt;
&lt;p&gt;It is common to use this approach in hospital information systems to
synchronize between systems. When a patient registers, the registration system
creates a new patient record, and any systems that might need to interact with
the patient (e.g., the pharmacy) can create a corresponding record of their
own. If a change or deletion occurs, the patient management system sends an
update message, and the other systems update their records accordingly.&lt;/p&gt;
&lt;h2&gt;Data ownership&lt;/h2&gt;
&lt;p&gt;Generally speaking, we would prefer to have one system that owns the data and
acts as the single source of truth. This system generates events when the data
changes.&lt;/p&gt;
&lt;p&gt;In a services architecture, however, we might have multiple systems that
generate events based on their responsibility, and we need to put the pieces
together to get the whole picture.&lt;/p&gt;
&lt;p&gt;For example, we might have a catalog that owns product items, an order
management system that records sales of items, and a logistics system that
records shipments of the orders. These systems generate change events that may
be incidental to their internal processing, but are useful triggers for other
systems.&lt;/p&gt;
&lt;p&gt;When a customer buys a product, we might need to update the quantity on hand in
an inventory management system. That might trigger the product detail page
to show a different availability date.&lt;/p&gt;
&lt;p&gt;We might also separate internal systems from the services needed to run the
public website, improving public site reliability and scalability.&lt;/p&gt;
&lt;p&gt;It can be helpful to think about the business processes as a flow of data,
coordinated between multiple systems. For example, we have the internal
processes associated with defining products, then we "publish" them to
the website. At that point, the public catalog, order handling, and customer
management systems become critical. After an order is placed, internal fulfillment
and accounting systems take over.&lt;/p&gt;
&lt;p&gt;In between, some processes need to manage product data as a whole, e.g.,
merchandising looks for products in the catalog that match certain conditions
and creates a marketing campaign for them. Machine learning processes may
analyze all the products in the database to determine pricing, group similar
products, create suggestions, or optimize product descriptions for SEO.&lt;/p&gt;
manageable in production. They can also help make development with containers
faster.&lt;/p&gt;
&lt;h2&gt;Kubernetes health checks&lt;/h2&gt;
&lt;p&gt;Kubernetes has well-defined semantics for how health checks should behave,
distinguishing between "startup", "liveness", and "readiness".&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Liveness&lt;/strong&gt; is the core health check. It …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Health checks are an important part of making your application reliable and
manageable in production. They can also help make development with containers
faster.&lt;/p&gt;
&lt;h2&gt;Kubernetes health checks&lt;/h2&gt;
&lt;p&gt;Kubernetes has well-defined semantics for how health checks should behave,
distinguishing between "startup", "liveness", and "readiness".&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Liveness&lt;/strong&gt; is the core health check. It determines whether the app is alive
and able to respond to requests. It should be relatively fast, as it is called
frequently, but should include checks for dependencies, e.g., whether the app
can connect to a database or back-end service. If the liveness check fails for
a specified period, Kubernetes kills and replaces the instance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Startup&lt;/strong&gt; checks whether the app has finished booting up. It is useful when
the app may take significant time to start, e.g., because it loads data
from a database into a cache. Separating this from liveness allows us to use
different timeouts rather than making the liveness timeout long enough to
support startup. Once startup has completed successfully, Kubernetes does not
call it again; it switches to the liveness check.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Readiness&lt;/strong&gt; checks whether the app should receive requests. Kubernetes uses
it to decide whether to route traffic to the instance. If the readiness
probe fails, Kubernetes doesn't kill and restart the container. Instead it
marks the pod as "unready" and stops sending traffic to it, e.g., in the
ingress. It is useful to be able to temporarily stop serving traffic, e.g.,
when the instance is overloaded or it has transient problems connecting to a
back-end service.&lt;/p&gt;
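&lt;p&gt;For an app exposing HTTP health endpoints, the three probes described above might be configured as follows (paths, port, and timings are illustrative):&lt;/p&gt;

```yaml
startupProbe:
  httpGet:
    path: /healthz/startup
    port: 4000
  periodSeconds: 5
  failureThreshold: 30   # allow up to 150s to boot

livenessProbe:
  httpGet:
    path: /healthz/liveness
    port: 4000
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /healthz/readiness
    port: 4000
  periodSeconds: 5
  timeoutSeconds: 2
  failureThreshold: 2
```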
&lt;p&gt;Kubernetes checks themselves rely only on the HTTP response code to determine
service health. A code greater than or equal to 200 and less than 400 indicates
success, and any other code indicates failure. While Kubernetes treats the
health check response as binary, i.e., ok or not, the health check can return
additional information about the cause of the error, making troubleshooting
easier for developers or ops staff. This might be a simple string
or JSON, e.g. &lt;code&gt;{"status": "OK"}&lt;/code&gt; / &lt;code&gt;{"status": "error", "code": 503,
"reason": "timeout connecting to downstream service"}&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In addition, the service should generally write a message to the log or add
information to a trace, allowing people to find and debug systems that are having
problems.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/cogini/kubernetes_health_check"&gt;kubernetes_health_check&lt;/a&gt;
project provides a Plug that handles Kubernetes health check requests.
It delegates to a module that is custom to the app. Here is an example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;defmodule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Example.Health&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@moduledoc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;Collect app status for Kubernetes health checks.&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;alias&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Example.Repo&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:example&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@repos&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;compile_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ecto_repos&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;||&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@doc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;Check if the app has finished booting up.&lt;/span&gt;

&lt;span class="sh"&gt;  This returns app status for the Kubernetes `startupProbe`.&lt;/span&gt;
&lt;span class="sh"&gt;  Kubernetes checks this probe repeatedly until it returns a successful&lt;/span&gt;
&lt;span class="sh"&gt;  response. After that Kubernetes switches to executing the other two probes.&lt;/span&gt;
&lt;span class="sh"&gt;  If the app fails to successfully start before the `failureThreshold` time is&lt;/span&gt;
&lt;span class="sh"&gt;  reached, Kubernetes kills the container and restarts it.&lt;/span&gt;

&lt;span class="sh"&gt;  For example, this check might return OK when the app has started the&lt;/span&gt;
&lt;span class="sh"&gt;  web-server, connected to a DB, connected to external services, and performed&lt;/span&gt;
&lt;span class="sh"&gt;  initial setup tasks such as loading a large cache.&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@spec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;startup&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;non_neg_integer&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;binary&lt;/span&gt;&lt;span class="p"&gt;()}}&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;binary&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;startup&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Return error if there are available migrations which have not been executed.&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# This supports deployment to AWS ECS using the following strategy:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# https://engineering.instawork.com/elegant-database-migrations-on-ecs-74f3487da99f&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;#&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# By default Elixir migrations lock the database migration table, so they&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# will only run from a single instance.&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;migrations&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="na"&gt;@repos&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nc"&gt;Ecto.Migrator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;migrations&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;flatten&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;empty?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;migrations&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;liveness&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Database not migrated&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@doc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;Check if the app is alive and working properly.&lt;/span&gt;

&lt;span class="sh"&gt;  This returns app status for the Kubernetes `livenessProbe`.&lt;/span&gt;
&lt;span class="sh"&gt;  Kubernetes continuously checks if the app is alive and working as expected.&lt;/span&gt;
&lt;span class="sh"&gt;  If it crashes or becomes unresponsive for a specified period of time,&lt;/span&gt;
&lt;span class="sh"&gt;  Kubernetes kills and replaces the container.&lt;/span&gt;

&lt;span class="sh"&gt;  This check should be lightweight, only determining if the server is&lt;/span&gt;
&lt;span class="sh"&gt;  responding to requests and can connect to the DB.&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@spec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;liveness&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;non_neg_integer&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;binary&lt;/span&gt;&lt;span class="p"&gt;()}}&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;binary&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;liveness&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Ecto.Adapters.SQL&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;SELECT 1&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;num_rows&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]]}}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;

&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;inspect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;rescue&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;inspect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@doc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;Check if app should be serving public traffic.&lt;/span&gt;

&lt;span class="sh"&gt;  This returns app status for the Kubernetes `readinessProbe`.&lt;/span&gt;
&lt;span class="sh"&gt;  Kubernetes continuously checks if the app should serve traffic. If the&lt;/span&gt;
&lt;span class="sh"&gt;  readiness probe fails, Kubernetes doesn&amp;#39;t kill and restart the container,&lt;/span&gt;
&lt;span class="sh"&gt;  instead it marks the pod as &amp;quot;unready&amp;quot; and stops sending traffic to it, e.g.,&lt;/span&gt;
&lt;span class="sh"&gt;  in the ingress.&lt;/span&gt;

&lt;span class="sh"&gt;  This is useful to temporarily stop serving requests. For example, if the app&lt;/span&gt;
&lt;span class="sh"&gt;  gets a timeout connecting to a back end service, it might return an error for&lt;/span&gt;
&lt;span class="sh"&gt;  the readiness probe. After multiple failed attempts, it would switch to&lt;/span&gt;
&lt;span class="sh"&gt;  returning false for the `livenessProbe`, triggering a restart.&lt;/span&gt;

&lt;span class="sh"&gt;  Similarly, the app might return an error if it is overloaded, shedding&lt;/span&gt;
&lt;span class="sh"&gt;  traffic until it has caught up.&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="sh"&gt;&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@spec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;readiness&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;non_neg_integer&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;binary&lt;/span&gt;&lt;span class="p"&gt;()}}&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;binary&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;readiness&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;liveness&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@spec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;basic&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# | {:error, {status_code :: non_neg_integer(), reason :: binary()}}&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# | {:error, reason :: binary()}&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;basic&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Dependencies&lt;/h2&gt;
&lt;p&gt;Services also need health checks for the services they depend on.
In development, these might be databases or Kafka running in a container.
In production, those might be managed services in AWS.&lt;/p&gt;
&lt;p&gt;For services that do not provide an HTTP API, we can define a command that
runs within the container.&lt;/p&gt;
&lt;p&gt;For example, this probe checks a Postgres database container:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;readinessProbe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;pg_isready&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;periodSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;5&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;2&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;failureThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;20&lt;/span&gt;

&lt;span class="nt"&gt;livenessProbe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;psql&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;-w&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;-U&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;postgres&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;-d&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;my-db&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;-c&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;SELECT&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;periodSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;2&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;failureThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For historical reasons, Kubernetes checks are different from Docker
&lt;code&gt;healthcheck&lt;/code&gt; definitions.&lt;/p&gt;
&lt;p&gt;In &lt;code&gt;docker-compose.yml&lt;/code&gt;, health checks look like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="nt"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;3.9&amp;quot;&lt;/span&gt;
&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;deploy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;example-service&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;CMD&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;curl&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;http://127.0.0.1:4001/healthz&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;start_period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;6s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;2s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;5s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;20&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;depends_on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;postgres&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;service_healthy&lt;/span&gt;

&lt;span class="nt"&gt;postgres&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;postgres:14.1-alpine&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;restart&lt;/span&gt;&lt;span class="p p-Indicator"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;always&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;healthcheck&lt;/span&gt;&lt;span class="p p-Indicator"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;CMD-SHELL&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;pg_isready&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;start_period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;5s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;2s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;5s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;20&lt;/span&gt;

&lt;span class="nt"&gt;router&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ghcr.io/apollographql/router:v1.2.1&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;# GraphQL endpoint&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;4000:4000&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;# Health check&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;8088:8088&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;# https://www.apollographql.com/docs/router/configuration/overview&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;APOLLO_ROUTER_LOG&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;debug&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;APOLLO_ROUTER_SUPERGRAPH_PATH&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;/dist/schema/local.graphql&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;APOLLO_ROUTER_CONFIG_PATH&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;/router.yaml&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;APOLLO_ROUTER_HOT_RELOAD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;true&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;./apollo-router.yml:/router.yaml&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;./supergraph.graphql:/dist/schema/local.graphql&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;CMD-SHELL&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;curl&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;-v&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;--fail&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;http://127.0.0.1:8088/health&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;start_period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;5s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;2s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;5s&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;retries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;20&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;depends_on&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;deploy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;condition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;service_healthy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;With containerized tests, we can run the external API tests via
&lt;code&gt;docker-compose&lt;/code&gt;. Running &lt;code&gt;docker-compose up
router&lt;/code&gt; brings up the app container, the containers for other services it
depends on, the associated database containers, and the Apollo Router
container, waiting until each is up and healthy. We then run Postman/Newman
tests against the running stack.&lt;/p&gt;
&lt;p&gt;Robust health checks for each component in the stack help the system come up
quickly and reliably, and they provide messages that make startup failures
easy to debug.&lt;/p&gt;
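&lt;p&gt;Compose's &lt;code&gt;depends_on&lt;/code&gt; with &lt;code&gt;condition: service_healthy&lt;/code&gt; implements this
waiting for us, but the underlying semantics are easy to sketch: probe on an
interval, succeed on the first passing check, give up after a number of
retries. Here is a minimal shell sketch of that loop; the probe command is a
parameter, so any check (curl, pg_isready, etc.) can be plugged in:&lt;/p&gt;

```shell
#!/bin/sh
# Minimal sketch of Docker-style healthcheck semantics: run a probe command
# every $interval seconds, up to $retries times, and report the result.
wait_healthy() {
  probe=$1; retries=${2:-20}; interval=${3:-2}
  i=0
  while [ "$i" -lt "$retries" ]; do
    if $probe >/dev/null 2>&1; then
      echo healthy
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo unhealthy
  return 1
}
```

&lt;p&gt;For example, &lt;code&gt;wait_healthy "curl -sf http://127.0.0.1:4001/healthz" 20 2&lt;/code&gt;
mirrors the &lt;code&gt;retries: 20&lt;/code&gt; / &lt;code&gt;interval: 2s&lt;/code&gt; settings above.&lt;/p&gt;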
&lt;h3&gt;Running OS commands&lt;/h3&gt;
&lt;p&gt;Instead of external HTTP checks, we can execute a health check command inside
the container, e.g. using &lt;code&gt;curl&lt;/code&gt; to call the app on localhost.
While this does not exercise the full HTTP stack, it may be more
&lt;a href="https://github.com/kubernetes/kubernetes/issues/89898"&gt;reliable&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is an example Kubernetes probe that runs a command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;livenessProbe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;/app/grpc-health-probe&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;-addr=:50051&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;-connect-timeout=5s&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;-rpc-timeout=5s&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;failureThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;3&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;60&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;periodSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;successThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10&lt;/span&gt;
&lt;span class="nt"&gt;readinessProbe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;/app/grpc-health-probe&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;-addr=:50051&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;-connect-timeout=5s&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;-rpc-timeout=5s&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;failureThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;3&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;periodSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;successThreshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;When an Elixir app runs as a release, we can evaluate code directly to
run the health check, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;healthcheck&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;CMD&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;bin/api&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;eval&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;API.Health.liveness()&amp;quot;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;start_period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;2s&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1s&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Higher-level checks&lt;/h2&gt;
&lt;p&gt;The health checks above operate at the infrastructure level. They let
Kubernetes resolve problems automatically by restarting containers, scaling
resources, etc.&lt;/p&gt;
&lt;p&gt;We can also add production checks that detect problems visible to end
customers. For example, if users can’t log in, we should alert. Some of these
can be metrics-based: if we normally see 100 successful logins a minute and are
now seeing 0, there is a problem; likewise if failed logins jump from 1% to
50% over a period.&lt;/p&gt;
&lt;p&gt;We use external API tests as part of &lt;a href="/blog/breaking-up-the-monolith-building-testing-and-deploying-microservices/"&gt;containerized testing in CI&lt;/a&gt;.
We can leverage these tests to make production health checks for standard
scenarios across multiple services, e.g., a customer logs in, adds an item to
their cart, and then checks out.&lt;/p&gt;
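&lt;p&gt;A lightweight way to reuse those scenario tests in production is a scheduled
job that runs the collection and emits a pass/fail signal for the monitoring
system to alert on. Here is a sketch; &lt;code&gt;newman&lt;/code&gt; and the collection name
below are placeholders for whatever runner and scenarios you actually use:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: run an end-to-end scenario and emit a metric-style line that a
# monitoring agent can scrape. The check command is passed as arguments.
run_check() {
  name=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "healthcheck.${name}=1"
  else
    echo "healthcheck.${name}=0"
  fi
}

# Hypothetical usage:
# run_check checkout newman run checkout-flow.postman_collection.json
```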
&lt;p&gt;See the following articles for more background information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/"&gt;https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://shyr.io/blog/kubernetes-health-probes-elixir"&gt;https://shyr.io/blog/kubernetes-health-probes-elixir&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.redhat.com/blog/2020/11/10/you-probably-need-liveness-and-readiness-probes#"&gt;Kubernetes-compatible&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><category term="Programming"/><category term="kubernetes"/><category term="elixir"/><category term="phoenix"/><category term="health checks"/><category term="ecs"/></entry><entry><title>Presentation on thinking functionally in Elixir 2020</title><link href="https://www.cogini.com/blog/presentation-on-thinking-functionally-in-elixir-2020/" rel="alternate"/><published>2020-12-15T00:00:00+08:00</published><updated>2020-12-15T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2020-12-15:/blog/presentation-on-thinking-functionally-in-elixir-2020/</id><content type="html">&lt;p&gt;Here are the slides for the &lt;a href="https://www.cogini.com/files/elixir-thinking-functionally-2020.pdf"&gt;presentation on thinking functionally in
Elixir&lt;/a&gt; I gave to the local Elixir
user's group.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="erlang"/><category term="presentations"/><category term="functional programming"/></entry><entry><title>Choosing a Linux distribution</title><link href="https://www.cogini.com/blog/choosing-a-linux-distribution/" rel="alternate"/><published>2020-01-13T00:00:00+08:00</published><updated>2020-01-13T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2020-01-13:/blog/choosing-a-linux-distribution/</id><summary type="html">&lt;h2&gt;Short answer: Use Ubuntu LTS&lt;/h2&gt;
&lt;h2&gt;Long answer&lt;/h2&gt;
&lt;p&gt;There are two main families, RedHat and Debian. RedHat traditionally comes from
the corporate world, and Debian from the free software community. I have been
using Linux since 1993, so I will give a bit of a history lesson to explain the
motivation …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Short answer: Use Ubuntu LTS&lt;/h2&gt;
&lt;h2&gt;Long answer&lt;/h2&gt;
&lt;p&gt;There are two main families, RedHat and Debian. RedHat traditionally comes from
the corporate world, and Debian from the free software community. I have been
using Linux since 1993, so I will give a bit of a history lesson to explain the
motivation behind the popular distributions.&lt;/p&gt;
&lt;p&gt;RedHat was one of the first major commercial Linux distributions, and is the
most successful. They started by selling CDs which you could install on as many
servers as you like, hoping that larger customers would buy support contracts.
That didn't work, so they switched to a per-server licensing model.  When that
happened, volunteers took the source code from RedHat Enterprise Linux and made
their own releases. Because they couldn't use the RedHat trademark, they called
it something else. There was always a bit of a delay with releases, but it
worked fine.&lt;/p&gt;
&lt;p&gt;There are a couple of distributions like this, with more or less value add.
&lt;a href="https://www.centos.org/"&gt;CentOS&lt;/a&gt; is the most pure. A few years ago, RedHat
bought CentOS, so it's now an official part of RedHat, formalizing the model
and giving them more resources. RedHat also has a "bleeding edge" free
distribution called Fedora. It's popular with enthusiasts to run on the
desktop, but is not commonly run on servers.&lt;/p&gt;
&lt;p&gt;Debian is one of the first non-commercial Linux distributions, and is the most
popular.  Traditionally many of the people who actually write open source
packages ran Debian, and it was used by expert users who ran e.g. internet
service providers.  It is generally high quality, and has the most software
packages.  At one point, most of the core users were running a "rolling update"
version of Debian. Since it is run by volunteers, and there was nobody who
cared that much, it once went more than two years without a formal release.
Ubuntu was started by a .com millionaire who wanted to give back to the open
source community. Ubuntu was basically "Debian with regular releases."&lt;/p&gt;
&lt;p&gt;RedHat focused on the kernel and making stable, supported releases for
enterprise customers to run on their servers. They traditionally hire many of
the core Linux kernel developers. Ubuntu focused on building a good Linux
desktop experience. They offer predictable releases and commercial support, and
work with partners to certify Ubuntu.&lt;/p&gt;
&lt;p&gt;With this background, you can see a bit of the strengths and weaknesses of the
different distributions. Here is the current situation:&lt;/p&gt;
&lt;p&gt;The majority of enterprise servers run CentOS. If an enterprise needs
commercial support, then they pay for RedHat. Many commercial software
packages like Oracle were certified and supported on RedHat. Most dedicated
servers run CentOS by default, and it may be the only distribution a host supports.
Oracle created their own version of RedHat, competing with RedHat for support
contracts. Amazon Linux is based on RedHat. The specific differences between
Amazon Linux and RedHat are unclear, and it's a bit of a moving target. You
can't practically run Amazon Linux outside of Amazon.&lt;/p&gt;
&lt;p&gt;Ubuntu made deals with various partners like Amazon. It is a first class
supported distribution on AWS, and is very popular there. Ubuntu has largely
abandoned its desktop ambitions and is focused on the server side. Rather than
try to do proprietary development, it focuses on packaging upstream software in
a way that works well.&lt;/p&gt;
&lt;h2&gt;Recommendations&lt;/h2&gt;
&lt;p&gt;I have personally run both distributions for many years, and my preferences have
shifted back and forth over time. We manage hundreds of virtual and
dedicated servers, and we are currently about 50% CentOS and 50% Ubuntu.&lt;/p&gt;
&lt;p&gt;Until recently, my preference was CentOS 7, for these reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It is designed for servers, and has a long term support model so I don't have
  to upgrade frequently&lt;/li&gt;
&lt;li&gt;It doesn't cost money, but RedHat has a sustainable business model behind it&lt;/li&gt;
&lt;li&gt;RedHat employs many of the core kernel developers, making it solid from the
  bottom up. Ubuntu has traditionally been weaker at kernel support (and I have
  the scars from it)&lt;/li&gt;
&lt;li&gt;I can run it everywhere, on dedicated servers, cloud instances and in local
  dev environments&lt;/li&gt;
&lt;li&gt;Since Amazon Linux is based on RedHat, their software agents work well with
  it&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;CentOS has some disadvantages relative to Debian. It has fewer OS packages,
and relatively old versions; this is the downside of stability. RedHat supports
fewer software packages, with fewer options, so you may need to find
third-party repositories to get software, or build your own. That's rare,
though, as the packages you need for server use are generally there.&lt;/p&gt;
&lt;p&gt;Over the long support life of the distro, packages can get pretty old.  For
most packages, e.g. mail server software, it doesn't matter. For things like
the database, you may want newer features. Major projects like PostgreSQL have
their &lt;a href="https://www.postgresql.org/download/linux/redhat/"&gt;own supported packages&lt;/a&gt;,
which lets us stay up to date.  We generally stick to the EPEL repo. You can
often download random packages off the net, but it feels a bit questionable
sometimes. Why run a "supported" enterprise distro then rely on packages from
some random dude?&lt;/p&gt;
&lt;p&gt;The long term support for a server is much more important in a bare-metal
world, where upgrading can be traumatic. You want to be able to regularly
install security packages without worrying that it will take down the server.&lt;/p&gt;
&lt;p&gt;In the cloud, however, it's easy for us to set up a new instance with the latest OS,
verify that it works, then switch. For that, I prefer Ubuntu.&lt;/p&gt;
&lt;p&gt;It's a solid, well supported distro. The community is more friendly for
beginners, partly coming from the community nature of Debian, partly from the
desktop focus. It has access to all the Debian packages, with a commercial
model and regular releases. Packages are generally more up to date than CentOS,
and tend to have a more direct line from upstream projects.&lt;/p&gt;
&lt;p&gt;We normally use &lt;a href="https://www.packer.io/"&gt;Packer&lt;/a&gt; and
&lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt; to build an AMI specifically for an
application. When we need to update the AMI, we can run it through the same
CI/CD process that we use for the app, running tests and then deploying. If
there is a problem, we can roll it back, same way we would with an application
issue.&lt;/p&gt;
&lt;p&gt;This works well for applications which are under continuous development.
We can run the LTS (Long Term Support) versions of Ubuntu or more frequent
releases. They are supported for long enough to keep things stable, but we get
access to the latest versions of software.&lt;/p&gt;
&lt;p&gt;Ubuntu is well supported in cloud environments. We generally run &lt;a href="https://wiki.ubuntu.com/Minimal"&gt;Minimal
Ubuntu&lt;/a&gt;, which gives us a small install while
still staying compatible with other software.&lt;/p&gt;
&lt;p&gt;I prefer to run the community AMIs instead of marketplace AMIs, as it avoids
licensing weirdness that can keep me from recovering a machine when it
won't boot.&lt;/p&gt;
&lt;h2&gt;The future&lt;/h2&gt;
&lt;p&gt;Long term, the business model of selling Linux is questionable. Cloud providers
want to "commoditize the common good", i.e. they sell you machine hours, so
they want software to be free. Instead of running e.g. Oracle on RedHat, they
would prefer us to use &lt;a href="https://aws.amazon.com/rds/"&gt;RDS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;RedHat recently sold to IBM. I would not be surprised if Ubuntu sells to
Microsoft next.  We will be left with a community support model, and Debian is
the best working example of that right now. The community can occasionally be
dysfunctional, driven by politics, the "United Nations" of free software. It
works, though, and I expect it to continue.&lt;/p&gt;
&lt;p&gt;The future is containers, where we run only the minimum parts of the operating
system necessary to support a specific application. There are specialized
distros like Alpine, but I still prefer to run Minimal Ubuntu. It's reasonably
small (about 30 MB), but compatible with regular Ubuntu, making development
and testing easier.&lt;/p&gt;</content><category term="DevOps"/><category term="linux"/></entry><entry><title>Deploying an Elixir app to Digital Ocean with mix_deploy</title><link href="https://www.cogini.com/blog/deploying-an-elixir-app-to-digital-ocean-with-mix_deploy/" rel="alternate"/><published>2020-01-13T00:00:00+08:00</published><updated>2020-01-13T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2020-01-13:/blog/deploying-an-elixir-app-to-digital-ocean-with-mix_deploy/</id><summary type="html">&lt;p&gt;A gentle introduction to getting your Elixir / Phoenix app up and running on a server at &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt;.&lt;/p&gt;</summary><content type="html">&lt;p&gt;This is a gentle introduction to getting your Elixir / Phoenix app up and
running on a server at &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt; (affiliate link).
It starts from zero, assuming minimal experience with servers.&lt;/p&gt;
&lt;p&gt;It builds the app on the server, then uses Erlang releases to run the code
under systemd. It uses the &lt;a href="https://github.com/cogini/mix_deploy"&gt;mix_deploy&lt;/a&gt;
library to handle deployment tasks.&lt;/p&gt;
&lt;p&gt;Digital Ocean's smallest $5/month plan &lt;a href="/blog/benchmarking-phoenix-on-digital-ocean/"&gt;runs Elixir great&lt;/a&gt;.
This guide uses their &lt;a href="https://www.digitalocean.com/products/managed-databases/"&gt;managed databases&lt;/a&gt; service so
you don't need to manage the database.&lt;/p&gt;
&lt;p&gt;We will be using a boilerplate Phoenix project with a PostgreSQL database.
This guide assumes you are running macOS on your dev machine and Ubuntu 18.04
on the server.&lt;/p&gt;
&lt;p&gt;These instructions are based on this &lt;a href="https://github.com/cogini/mix-deploy-example"&gt;working example&lt;/a&gt; application
and the principles described in the blog post "&lt;a href="/blog/best-practices-for-deploying-elixir-apps/"&gt;Best practices for deploying Elixir apps&lt;/a&gt;".&lt;/p&gt;
&lt;p&gt;This post includes basic instructions to prepare your existing Elixir/Phoenix application for deployment using mix_deploy.
See &lt;a href="https://github.com/cogini/mix-deploy-example#preparing-an-existing-project-for-deployment"&gt;preparing an existing project for deployment&lt;/a&gt;
for more details.&lt;/p&gt;
&lt;p&gt;If you have any questions, contact me on the Elixir Slack at &lt;code&gt;jakemorrison&lt;/code&gt; or
open an issue &lt;a href="https://github.com/cogini/mix-deploy-example"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Overall approach&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Create the server&lt;/li&gt;
&lt;li&gt;Configure ssh&lt;/li&gt;
&lt;li&gt;Configure the build / deploy user&lt;/li&gt;
&lt;li&gt;Check out code on the server from git and build a release&lt;/li&gt;
&lt;li&gt;Deploy the release&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can first get the template running, then prepare your own project for deployment.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: This guide works with Ubuntu 18.04, CentOS 7, Ubuntu 16.04, and Debian
9.4. If you are
&lt;a href="/blog/choosing-a-linux-distribution/"&gt;not sure which distro to use&lt;/a&gt;,
choose Ubuntu 18.04. The approach also works for dedicated servers and cloud
instances.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The actual work of building and deploying releases is handled by simple shell
scripts which you run on the build server or from your dev machine via ssh, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;deploy@web-server
&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;build/mix-deploy-example
git&lt;span class="w"&gt; &lt;/span&gt;pull

&lt;span class="c1"&gt;# Build release&lt;/span&gt;
bin/build

&lt;span class="c1"&gt;# Extract release to target directory on local machine, creating current symlink&lt;/span&gt;
bin/deploy-release

&lt;span class="c1"&gt;# Run database migrations&lt;/span&gt;
bin/deploy-migrate

&lt;span class="c1"&gt;# Restart the systemd unit&lt;/span&gt;
sudo&lt;span class="w"&gt; &lt;/span&gt;bin/deploy-restart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Create the server&lt;/h2&gt;
&lt;p&gt;Go to &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt; (affiliate link) and
create a Droplet (virtual server).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Choose an image&lt;/strong&gt;: Choose Ubuntu 18.04&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a plan&lt;/strong&gt;: Standard is fine&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a size&lt;/strong&gt;: The smallest, $5/month Droplet is fine&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a datacenter region&lt;/strong&gt;: Select a data center near you&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add your SSH keys&lt;/strong&gt;: Select the "New SSH Key" button, and paste the
  contents of your &lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt; file.
  &lt;a href="https://www.cogini.com/blog/creating-an-ssh-key/"&gt;Create an ssh key&lt;/a&gt;, if you
  don't have one already.
  On Mac OS, you can copy your SSH key to clipboard by running &lt;code&gt;cat ~/.ssh/id_rsa.pub | pbcopy&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a hostname&lt;/strong&gt;: The default name works, but is awkward to remember and type.
  Use "web-server" or whatever you like.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The defaults for everything else are fine. Click the "Create" button.&lt;/p&gt;
&lt;h2&gt;Configure ssh to talk to your server&lt;/h2&gt;
&lt;p&gt;Note the IP address of your new droplet in the Digital Ocean UI.&lt;/p&gt;
&lt;p&gt;Configure &lt;code&gt;~/.ssh/config&lt;/code&gt; on your local dev machine so you can
&lt;a href="/blog/configure-ssh-to-connect-to-a-server"&gt;connect to the server&lt;/a&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Change&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;IP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;below&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;actual&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;IP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Droplet&lt;/span&gt;
&lt;span class="nx"&gt;Host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;HostName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;123.45.67.89&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
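&lt;p&gt;You can also set the user and agent forwarding in the same entry, so that later a plain &lt;code&gt;ssh web-server&lt;/code&gt; connects as the &lt;code&gt;deploy&lt;/code&gt; user (created below) with your keys forwarded. A sketch; the IP address is a placeholder:&lt;/p&gt;

```shell
# Append a host entry to ~/.ssh/config. HostName is a placeholder IP;
# User and ForwardAgent make "ssh web-server" behave like "ssh -A deploy@web-server".
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host web-server
  HostName 123.45.67.89
  User deploy
  ForwardAgent yes
EOF
chmod 600 ~/.ssh/config
```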

&lt;h2&gt;Create a deploy user on the web server&lt;/h2&gt;
&lt;p&gt;For security, we use two operating system user accounts, the &lt;code&gt;deploy&lt;/code&gt; user to
build and deploy the app, and the &lt;code&gt;app&lt;/code&gt; user to run the app.&lt;/p&gt;
&lt;p&gt;Connect to the web server as root:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;root@web-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Create the &lt;code&gt;deploy&lt;/code&gt; user:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;useradd&lt;span class="w"&gt; &lt;/span&gt;-m&lt;span class="w"&gt; &lt;/span&gt;-s&lt;span class="w"&gt; &lt;/span&gt;/bin/bash&lt;span class="w"&gt; &lt;/span&gt;deploy
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Configure &lt;code&gt;sudo&lt;/code&gt; to allow the &lt;code&gt;deploy&lt;/code&gt; user to run commands as root without a password:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;deploy ALL=(ALL) NOPASSWD:ALL&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;/etc/sudoers.d/10-app-deploy
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;There are more sophisticated ways to manage users. We normally
&lt;a href="/blog/managing-user-accounts-with-ansible/"&gt;manage user accounts with Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Configure ssh access to the &lt;code&gt;deploy&lt;/code&gt; user&lt;/h2&gt;
&lt;p&gt;Create the &lt;code&gt;.ssh&lt;/code&gt; directory and set permissions:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkdir&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;~deploy/.ssh
chown&lt;span class="w"&gt; &lt;/span&gt;deploy:deploy&lt;span class="w"&gt; &lt;/span&gt;~deploy/.ssh
chmod&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;700&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~deploy/.ssh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Allow the ssh key you set for the droplet root user to log into the &lt;code&gt;deploy&lt;/code&gt;
user account:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cp&lt;span class="w"&gt; &lt;/span&gt;~root/.ssh/authorized_keys&lt;span class="w"&gt; &lt;/span&gt;~deploy/.ssh
chown&lt;span class="w"&gt; &lt;/span&gt;deploy:deploy&lt;span class="w"&gt; &lt;/span&gt;~deploy/.ssh/authorized_keys
chmod&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~deploy/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Exit the ssh session and connect again using the &lt;code&gt;deploy&lt;/code&gt; user.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;deploy@web-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If it doesn't work, check that the public key from your dev machine
(&lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt;) is in the &lt;code&gt;deploy&lt;/code&gt; user's &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; file and
that the file permissions are correct. Run ssh with &lt;code&gt;-vv&lt;/code&gt; for verbose output, or look at
&lt;code&gt;/var/log/auth.log&lt;/code&gt; on the server.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-A&lt;/code&gt; flag forwards your ssh agent, giving the session on the server access to your
local ssh keys without copying them to the server. If your local user can
access a GitHub repo, you can also access it from the server.&lt;/p&gt;
&lt;p&gt;Make sure that the &lt;code&gt;deploy&lt;/code&gt; user can run commands with sudo:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;-s
&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Check out the app source&lt;/h2&gt;
&lt;p&gt;As the &lt;code&gt;deploy&lt;/code&gt; user on the build machine, create the build dir:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkdir&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;~/build
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Check out the app source:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~/build
git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/cogini/mix-deploy-example&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c1"&gt;# or your app repo&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;mix-deploy-example
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Install build dependencies&lt;/h2&gt;
&lt;p&gt;To build the app, we need Erlang, Elixir, and Node.js on the
build server. Run the following script to install them from OS packages:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;bin/build-install-deps-ubuntu
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See the &lt;a href="https://elixir-lang.org/install.html"&gt;instructions on the Elixir website&lt;/a&gt; for more details on installing
Elixir and dependencies.&lt;/p&gt;
&lt;p&gt;We generally use &lt;a href="/blog/using-asdf-with-elixir-and-phoenix/"&gt;ASDF&lt;/a&gt; to manage
build tools. That allows us to precisely specify versions and install multiple
versions at once.&lt;/p&gt;
&lt;h2&gt;Create the database&lt;/h2&gt;
&lt;p&gt;Most apps use a database. You can install the database on the same Droplet
that runs your app; that works fine and is cheaper, but then you have to manage
the database yourself. This guide uses Digital Ocean's
&lt;a href="https://www.digitalocean.com/products/managed-databases/"&gt;managed databases&lt;/a&gt; service.&lt;/p&gt;
&lt;p&gt;In the Digital Ocean UI, select &lt;em&gt;Create &amp;rarr; Databases&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Choose a database engine&lt;/strong&gt;: Select PostgreSQL 11&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a cluster configuration&lt;/strong&gt;: $15/month is fine&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a datacenter&lt;/strong&gt;: Use the same data center as your droplet&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a unique database cluster name&lt;/strong&gt;: The default name is fine&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While the database is being created, in the &amp;ldquo;Getting started&amp;rdquo;
section of the page, click the bullet point that says &amp;ldquo;Secure this
database cluster.&amp;rdquo; Under &amp;ldquo;Restrict inbound connections&amp;rdquo;
select your droplet and click &amp;ldquo;Allow these inbound sources only.&amp;rdquo;
This ensures that only your application server can connect to the database.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.cogini.com/images/blog/restrict-inbound-connections.png" /&gt;&lt;/p&gt;
&lt;h3&gt;Create app databases and users&lt;/h3&gt;
&lt;p&gt;To get started, you can use the &lt;code&gt;defaultdb&lt;/code&gt;
database and &lt;code&gt;doadmin&lt;/code&gt; user that the setup wizard created for you. However, it
is better to &lt;a href="/blog/multiple-databases-with-digital-ocean-managed-databases-service/"&gt;create a separate database and database user
for each app environment&lt;/a&gt;.&lt;/p&gt;
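&lt;p&gt;As a sketch of what the linked post covers, you can create a dedicated database and user per environment with &lt;code&gt;psql&lt;/code&gt; (the connection string and all names here are hypothetical placeholders):&lt;/p&gt;

```shell
# Connect as the admin user (connection string from the Digital Ocean console)
# and create a dedicated user and database for the prod environment.
# All names here are placeholders; pick your own and use a strong password.
psql "postgresql://doadmin:ADMIN_PASSWORD@db-host.db.ondigitalocean.com:25060/defaultdb?sslmode=require" <<'SQL'
CREATE USER myapp_prod WITH PASSWORD 'CHANGEME';
CREATE DATABASE myapp_prod OWNER myapp_prod;
SQL
```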
&lt;h2&gt;Configuration&lt;/h2&gt;
&lt;p&gt;There are four kinds of things that we may want to configure:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Static data, e.g. file paths. This is the same for all machines.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Settings specific to the environment, e.g. the hostname of the db server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Secrets such as db passwords, API keys or TLS certificates.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Dynamic attributes such as the IP addresses of the server or other
   machines in the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In a simple deployment, you can put all the configuration in config files like
&lt;code&gt;config/prod.exs&lt;/code&gt;. It will then be compiled into the release package.&lt;/p&gt;
&lt;p&gt;When building on a different machine, e.g. a CI/CD system, it's more secure to
keep secrets separate from the release and load them at runtime.  Similarly, we
may want to build a release, run it in a test environment, and then deploy the
same release to production.&lt;/p&gt;
&lt;p&gt;In these cases, we keep the config outside the release file and load it at
runtime. We might read it from environment variables or external config files.
Or we might read it from a system like AWS Systems Manager Parameter Store.
See "&lt;a href="/blog/best-practices-for-deploying-elixir-apps/"&gt;Best practices for deploying Elixir apps&lt;/a&gt;"
for more details.&lt;/p&gt;
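&lt;p&gt;One common shape for runtime config is an environment file kept outside the release, which a systemd unit can load and the app can read with &lt;code&gt;System.get_env/1&lt;/code&gt;. A minimal sketch; the file path and variable names are illustrative, not what mix_deploy generates by default:&lt;/p&gt;

```shell
# Write secrets to an environment file kept outside the release package.
# Path and variable names are illustrative; a systemd unit can load such a
# file with an EnvironmentFile= directive, and the app reads the variables
# at startup with System.get_env/1.
mkdir -p etc
cat > etc/environment <<'EOF'
DATABASE_URL=ecto://myapp_prod:CHANGEME@db-host:25060/myapp_prod
SECRET_KEY_BASE=CHANGEME2
EOF
chmod 600 etc/environment
```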
&lt;h2&gt;Building&lt;/h2&gt;
&lt;p&gt;On your build machine, build the app by running the build script:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bin/build
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In addition to the normal Phoenix build steps, this command sets up the deploy scripts by
running the following &lt;code&gt;mix_systemd&lt;/code&gt; and &lt;code&gt;mix_deploy&lt;/code&gt; commands:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# NOTE: run by bin/build&lt;/span&gt;
mix&lt;span class="w"&gt; &lt;/span&gt;systemd.init
mix&lt;span class="w"&gt; &lt;/span&gt;systemd.generate

mix&lt;span class="w"&gt; &lt;/span&gt;deploy.init
mix&lt;span class="w"&gt; &lt;/span&gt;deploy.generate
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The configuration is minimal. In &lt;code&gt;config/prod.exs&lt;/code&gt; we set the name of the OS user that the
app runs under to &lt;code&gt;app&lt;/code&gt; and list the deploy script templates to generate:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_systemd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# Run db migrations before starting the app&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# exec_start_pre: [&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;#   [:deploy_dir, &amp;quot;/bin/deploy-migrate&amp;quot;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# ],&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_group&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# Generate runtime scripts from templates&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;templates&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Systemd wrappers&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;start&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;stop&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;restart&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;enable&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# System setup&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;create-users&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;create-dirs&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;set-perms&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Local deploy&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;init-local&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;copy-files&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;release&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;rollback&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# DB migrations&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;migrate&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_group&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Prepare runtime configuration&lt;/h2&gt;
&lt;p&gt;Edit &lt;code&gt;config/prod.exs&lt;/code&gt; with the runtime settings:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Config&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy_example&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;MixDeployExample.Repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;doadmin&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;CHANGEME&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;defaultdb&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;db-postgresql-sfo2-xxx-do-user-yyy-0.db.ondigitalocean.com&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25060&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;pool_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy_example&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;MixDeployExampleWeb.Endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;secret_key_base&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;CHANGEME2&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can generate a unique value for &lt;code&gt;secret_key_base&lt;/code&gt; using this command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;phx.gen.secret
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
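&lt;p&gt;If Elixir isn't installed where you need the secret, a value of the same shape can be generated with &lt;code&gt;openssl&lt;/code&gt; (a sketch; &lt;code&gt;mix phx.gen.secret&lt;/code&gt; is the canonical way):&lt;/p&gt;

```shell
# Generate 48 random bytes, base64-encoded: a 64-character string,
# the same length mix phx.gen.secret produces by default.
SECRET_KEY_BASE=$(openssl rand -base64 48)
echo "${#SECRET_KEY_BASE}"   # prints 64
```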

&lt;h2&gt;Build&lt;/h2&gt;
&lt;p&gt;Build the app and make a release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bin/build
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Initialize local system&lt;/h2&gt;
&lt;p&gt;Run this once to set up the system for the app, creating users and directories for
releases, runtime configuration, etc.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;bin/deploy-init-local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Because this script changes group membership, log out and back in again to reload your user's privileges.&lt;/p&gt;
&lt;h2&gt;Deploy the app&lt;/h2&gt;
&lt;p&gt;Deploy the release to the local machine:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# Extract release to target directory, creating current symlink&lt;/span&gt;
bin/deploy-release

&lt;span class="c1"&gt;# Run database migrations&lt;/span&gt;
bin/deploy-migrate

&lt;span class="c1"&gt;# Restart the systemd unit&lt;/span&gt;
sudo&lt;span class="w"&gt; &lt;/span&gt;bin/deploy-restart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
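&lt;p&gt;Under the hood, each release is extracted into its own directory and a &lt;code&gt;current&lt;/code&gt; symlink is repointed, which is what makes &lt;code&gt;deploy-rollback&lt;/code&gt; cheap. A sketch of the idea with hypothetical paths and timestamps:&lt;/p&gt;

```shell
# Sketch of the layout (hypothetical paths and timestamps): each release
# lives in its own directory, and "current" is a symlink that deploy and
# rollback repoint with ln -sfn.
base=$(mktemp -d)/mix-deploy-example
mkdir -p "$base/releases/20200112T120000" "$base/releases/20200113T090000"
ln -sfn "$base/releases/20200113T090000" "$base/current"   # deploy new release
readlink "$base/current"
ln -sfn "$base/releases/20200112T120000" "$base/current"   # rollback
readlink "$base/current"
```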

&lt;h2&gt;Check that it works&lt;/h2&gt;
&lt;p&gt;Make a request to the server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;curl&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;http://localhost:4000/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can get a console on the running release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;-i&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;app&lt;span class="w"&gt; &lt;/span&gt;/srv/mix-deploy-example/bin/deploy-remote-console
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can also have a look at the logs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;systemctl&lt;span class="w"&gt; &lt;/span&gt;status&lt;span class="w"&gt; &lt;/span&gt;mix-deploy-example
sudo&lt;span class="w"&gt; &lt;/span&gt;journalctl&lt;span class="w"&gt; &lt;/span&gt;-r&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;mix-deploy-example
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can roll back the release with the following:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bin/deploy-rollback
sudo&lt;span class="w"&gt; &lt;/span&gt;bin/deploy-restart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Configure the server to listen on port 80&lt;/h2&gt;
&lt;p&gt;Listening on port 4000 might be fine if the app sits behind a load balancer;
otherwise we need to make the app available on port 80. There are two
ways to do this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Port forwarding using &lt;code&gt;iptables&lt;/code&gt; (see
  &lt;a href="/blog/port-forwarding-with-iptables/"&gt;Port forwarding with iptables&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reverse proxy using a web server (see
  &lt;a href="/blog/serving-your-phoenix-app-with-nginx/"&gt;Serving your Phoenix app with Nginx&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
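&lt;p&gt;For the &lt;code&gt;iptables&lt;/code&gt; option, the core rule redirects inbound port 80 to 4000. A sketch; see the linked post for making the rule persistent across reboots, and adjust the interface name to your server:&lt;/p&gt;

```shell
# Run as root. Redirect incoming TCP port 80 on the public interface
# (eth0 here; check yours with "ip addr") to the app listening on port 4000.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 4000
```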
&lt;p&gt;After you complete this step, you should be able to access your website
in the browser by navigating to your droplet's public IP address.&lt;/p&gt;
&lt;h2&gt;SSL&lt;/h2&gt;
&lt;p&gt;The steps necessary to get SSL set up with your Phoenix application depend on
the approach that you took in the previous step. If you are forwarding ports
using &lt;code&gt;iptables&lt;/code&gt;, then you should set up SSL in your application's endpoint,
as described in the &lt;a href="https://hexdocs.pm/phoenix/endpoint.html#using-ssl"&gt;Phoenix docs&lt;/a&gt;.
You can get an SSL certificate for free from &lt;a href="https://letsencrypt.org/"&gt;Let's Encrypt&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you are running behind an Nginx reverse proxy, you should instead set up SSL
in Nginx. The necessary steps are described in
&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-18-04"&gt;Digital Ocean's tutorials&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;How to prepare your Phoenix app for deployment&lt;/h1&gt;
&lt;p&gt;The following are the steps used to set up this repo; follow them to add
deployment support to your own project. The repo is built as a series of git commits, so you
can see how it works step by step.&lt;/p&gt;
&lt;h2&gt;Generate Phoenix project&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;phx.new&lt;span class="w"&gt; &lt;/span&gt;your_app
mix&lt;span class="w"&gt; &lt;/span&gt;deps.get
&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;assets&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npm&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;node&lt;span class="w"&gt; &lt;/span&gt;node_modules/webpack/bin/webpack.js&lt;span class="w"&gt; &lt;/span&gt;--mode&lt;span class="w"&gt; &lt;/span&gt;development
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Add &lt;code&gt;mix.lock&lt;/code&gt; to git&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;package-lock.json&lt;/code&gt; to git&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Configure releases&lt;/h2&gt;
&lt;p&gt;Elixir 1.9 has built-in support for creating releases. For earlier versions, use the
&lt;a href="https://github.com/bitwalker/distillery"&gt;Distillery&lt;/a&gt; library.&lt;/p&gt;
&lt;p&gt;Generate initial config files in the &lt;code&gt;rel&lt;/code&gt; dir:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;release.init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Check the &lt;code&gt;rel&lt;/code&gt; directory into git.&lt;/p&gt;
&lt;p&gt;In &lt;code&gt;mix.exs&lt;/code&gt;, tell Mix not to include Windows executables in releases. In the main project
configuration, add the option &lt;code&gt;:releases&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;project&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy_example&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;0.1.0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;elixir&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;~&amp;gt; 1.9&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;elixirc_paths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;elixirc_paths&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Mix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;compilers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:phoenix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:gettext&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Mix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;compilers&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;start_permanent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Mix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:prod&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;aliases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;aliases&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;deps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deps&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# add this line:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;releases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;releases&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then, in the same file, add a private function that returns your project's release
configuration:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;releases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# change this to your application name&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;mix_deploy_example&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="ss"&gt;include_executables_for&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:unix&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Tune the OS for performance&lt;/h3&gt;
&lt;p&gt;For optimal performance, increase the default limits for
open TCP ports and file handles. Follow the instructions in the post
&lt;a href="/blog/tuning-tcp-ports-for-your-elixir-app/"&gt;Tuning TCP ports for your Elixir app&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Add runtime config files&lt;/h2&gt;
&lt;p&gt;Elixir 1.9 makes loading runtime configuration from Elixir source files straightforward.
The file &lt;code&gt;config/releases.exs&lt;/code&gt; is copied into your release and evaluated at startup,
and from it you can import an external configuration file. Edit &lt;code&gt;config/releases.exs&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Config&lt;/span&gt;
&lt;span class="n"&gt;import_config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/etc/mix-deploy-example/config.exs&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Create a &lt;code&gt;config/prod.secret.exs.sample&lt;/code&gt; that you can use to generate production
configuration files on the build server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Config&lt;/span&gt;

&lt;span class="c1"&gt;# Change these identifiers to ones specific to your application&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy_example&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;MixDeployExample.Repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;CHANGEME&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;CHANGEME&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;CHANGEME&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;CHANGEME&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;pool_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy_example&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;MixDeployExampleWeb.Endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;secret_key_base&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;CHANGEME2&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Add migrator module (optional)&lt;/h2&gt;
&lt;p&gt;For the &lt;code&gt;bin/deploy-migrate&lt;/code&gt; script to work, you need to add a migrator
module to your project, as described in the post
&lt;a href="/blog/running-ecto-migrations-in-a-release/"&gt;Running Ecto migrations in a release&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If your build server and production server are the same machine, you can also skip this
step and just run your migrations with &lt;code&gt;MIX_ENV=prod mix ecto.migrate&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Set up ASDF&lt;/h2&gt;
&lt;p&gt;Create a &lt;code&gt;.tool-versions&lt;/code&gt; file in the root of your project, describing the versions
of Erlang/OTP, Elixir, and Node.js that you will be building with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;erlang 21.3
elixir 1.9.0
nodejs 10.16.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Install mix_deploy and mix_systemd&lt;/h2&gt;
&lt;p&gt;Add libraries to deps from Hex:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:mix_systemd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;~&amp;gt; 0.5.0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;~&amp;gt; 0.5.0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Or from GitHub:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:mix_systemd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;cogini/mix_systemd&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;override&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;cogini/mix_deploy&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Add &lt;code&gt;rel/templates&lt;/code&gt; and &lt;code&gt;bin/deploy-*&lt;/code&gt; to &lt;code&gt;.gitignore&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;/rel/templates&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;.gitignore
&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;/bin/deploy-*&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;.gitignore
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Copy build and utility scripts into your repo&lt;/h2&gt;
&lt;p&gt;Copy shell scripts from the &lt;code&gt;bin/&lt;/code&gt; directory of the &lt;code&gt;mix-deploy-example&lt;/code&gt;
repo to the &lt;code&gt;bin/&lt;/code&gt; directory of your project.&lt;/p&gt;
&lt;p&gt;These scripts build your release or install the required dependencies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;build&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-install-asdf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-install-asdf-deps-centos&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-install-asdf-deps-ubuntu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-install-asdf-init&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-install-asdf-macos&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-install-deps-centos&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-install-deps-ubuntu&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This script verifies that your application is running correctly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;validate-service&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Check these scripts into git.&lt;/p&gt;
&lt;h2&gt;Configure for running in a release&lt;/h2&gt;
&lt;p&gt;In &lt;code&gt;config/prod.exs&lt;/code&gt;, uncomment or add this line so that Phoenix can run correctly in a release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:phoenix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:serve_endpoints&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In the same file, configure &lt;code&gt;mix_deploy&lt;/code&gt; and &lt;code&gt;mix_systemd&lt;/code&gt; to run your application
as the &lt;code&gt;app&lt;/code&gt; user. This step is mandatory:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_deploy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_group&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;

&lt;span class="c1"&gt;# Minimal&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:mix_systemd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;app_group&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Still in &lt;code&gt;prod.exs&lt;/code&gt;, configure your application's endpoint to read the port
number from the &lt;code&gt;PORT&lt;/code&gt; environment variable, which &lt;code&gt;systemd&lt;/code&gt; will set:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:your_app_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;YourAppNameWeb.Endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:inet6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;PORT&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;||&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4000&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Confirm that everything compiles by building the app:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;deps.get
mix&lt;span class="w"&gt; &lt;/span&gt;deps.compile
mix&lt;span class="w"&gt; &lt;/span&gt;compile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should be able to run the app locally with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create development database&lt;/span&gt;
mix&lt;span class="w"&gt; &lt;/span&gt;ecto.create

&lt;span class="c1"&gt;# Compile assets with production settings&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;assets&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npm&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npm&lt;span class="w"&gt; &lt;/span&gt;run&lt;span class="w"&gt; &lt;/span&gt;deploy&lt;span class="o"&gt;)&lt;/span&gt;

mix&lt;span class="w"&gt; &lt;/span&gt;phx.server
open&lt;span class="w"&gt; &lt;/span&gt;http://localhost:4000/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If everything seems to work, you can proceed with deployment just like you did
with the &lt;code&gt;mix-deploy-example&lt;/code&gt; sample application.&lt;/p&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/></entry><entry><title>Deploying complex apps to AWS with Terraform, Ansible, and Packer</title><link href="https://www.cogini.com/blog/deploying-complex-apps-to-aws-with-terraform-ansible-and-packer/" rel="alternate"/><published>2020-01-11T00:00:00+08:00</published><updated>2020-01-11T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2020-01-11:/blog/deploying-complex-apps-to-aws-with-terraform-ansible-and-packer/</id><summary type="html">&lt;p&gt;Recently we helped a client migrate a set of complex Ruby on Rails applications
to AWS, deploying across multiple environments and regions.&lt;/p&gt;
&lt;p&gt;They have a half-dozen SaaS products which they have built over the last
decade. They had been running them on a set of shared physical servers, with
lots …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Recently we helped a client migrate a set of complex Ruby on Rails applications
to AWS, deploying across multiple environments and regions.&lt;/p&gt;
&lt;p&gt;They have a half-dozen SaaS products which they have built over the last
decade. They had been running them on a set of shared physical servers, with
lots of weird undocumented relationships between components.&lt;/p&gt;
&lt;p&gt;There were problems with reliability and performance, as well as overall
complexity. They were deploying using Capistrano to push releases directly to
the production servers, so they didn't have an enforced release process between
dev, staging and production environments.&lt;/p&gt;
&lt;p&gt;The big driver for improvement, however, was the need to separate customer data
by country/region. GDPR compliance is easier if they keep European data hosted
in Europe. Customers in China were also complaining about poor network
performance crossing the "Great Firewall".&lt;/p&gt;
&lt;p&gt;They needed multiple environments for each app: dev, staging, production in
US/Canada/EU/China, plus a "demo" environment where customers can try
out the app in a sandbox.&lt;/p&gt;
&lt;p&gt;They needed a controlled release process with automated tests and ability to
roll back in case of problems. They had heavy load during certain parts of the
year, and underutilization the rest of the time, so they needed autoscaling.
They needed robust monitoring, metrics and alerting.&lt;/p&gt;
&lt;h1&gt;The solution&lt;/h1&gt;
&lt;p&gt;They had tried to containerize their apps, but after months of poor progress, they
gave up. It was too disruptive to their development team. They had to make a lot
of changes at once, and everyone was having to become deployment experts.
We came in and designed a new system that required minimal changes to their
apps and workflow while solving their deployment issues.&lt;/p&gt;
&lt;p&gt;One of the most important design considerations was handling all their apps
and environments with a common framework. If there are too many special cases,
the system becomes unmanageable. It's a false economy to optimize one
environment with custom code while increasing the cost of managing the overall
system. It is better to have a standard template with configuration options. On
the other hand, there can't be too much configurability; the system must be "opinionated".&lt;/p&gt;
&lt;p&gt;The apps are all similar, following the standard structure for large Rails
apps: front end web, background job processing, Redis or Memcached for caching,
Elasticsearch, MySQL or PostgreSQL. They need to run slightly differently in
dev, staging, demo and multiple production environments.  AWS China in
particular has many differences from standard AWS due to missing services and
lack of encryption.&lt;/p&gt;
&lt;p&gt;They needed an automated CI/CD pipeline, running unit tests before release,
standard deployment and rollback, production monitoring and centralized
logging.&lt;/p&gt;
&lt;p&gt;With a good pipeline, developers mostly don't care about deploy time, it
happens in the background. Time to get a change deployed can become very
important, however, when dealing with production problems. If each iteration
takes a half hour to deploy, you are going to make a bad day even worse.&lt;/p&gt;
&lt;p&gt;Running in China, with a very slow connection to the outside world, means that
we have to cache things like gems and OS packages locally, rather than
downloading them every time from the network. Otherwise it can take ages to
build an AMI.&lt;/p&gt;
&lt;h1&gt;Architecture&lt;/h1&gt;
&lt;h2&gt;Multiple AWS accounts&lt;/h2&gt;
&lt;p&gt;Running multiple apps in the same environment requires additional layers of
abstraction and configuration in your automation, making things more complex.&lt;/p&gt;
&lt;p&gt;AWS accounts are free, so we use
&lt;a href="https://aws.amazon.com/organizations/"&gt;AWS Organizations&lt;/a&gt; to set up a master AWS
account for billing, then an AWS account per environment (dev, staging, prod).
Next we create a VPC per app in each environment. That removes layers, making
the deployment scripts simple and consistent between the different
environments.&lt;/p&gt;
&lt;p&gt;If we have the same ops team handling prod for multiple apps, then we can
put all the prod VPCs in the same AWS account. For extra security, we can
easily make a separate AWS account per app + environment. The scripts stay the
same, we just change the &lt;code&gt;AWS_PROFILE&lt;/code&gt; to point to the right place.&lt;/p&gt;
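&lt;p&gt;For example, with a CLI profile per AWS account (the names here are hypothetical),
switching environments is just a matter of selecting the profile:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# ~/.aws/config -- one profile per AWS account (example names)
[profile myapp-dev]
region = us-east-1

[profile myapp-prod]
region = eu-west-1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then &lt;code&gt;AWS_PROFILE=myapp-prod terraform plan&lt;/code&gt; runs against the production account.&lt;/p&gt;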
&lt;h2&gt;Sharing resources between accounts&lt;/h2&gt;
&lt;p&gt;The downside to having multiple AWS accounts is that we need permissions to
share resources across accounts. While we can do this with IAM, it may be
better to accept some duplicated work to improve security, reduce coupling, and
keep the configuration consistent.&lt;/p&gt;
&lt;p&gt;For example, if we are managing the DNS domain for the app in Route53 in the
prod account, then scripts in dev need cross-account permissions to create host
entries.  Instead, we can use a different domain per environment, e.g.
&lt;code&gt;example.com&lt;/code&gt; for production, &lt;code&gt;example-dev.com&lt;/code&gt; for development. This is more
secure and keeps mistakes from affecting production. It also makes it easy to
use consistent subdomains for e.g. &lt;code&gt;api.example.com&lt;/code&gt; or per-customer
subdomains.&lt;/p&gt;
&lt;p&gt;Similarly, we could build an AMI in one environment and use it everywhere, but
it may be better to just build it once per env. There are definitely good
reasons for having immutable artifacts, running exactly the same AMI in QA and
prod, but saving resources is not the main motivation.&lt;/p&gt;
&lt;h2&gt;Structure&lt;/h2&gt;
&lt;p&gt;Following is a standard AWS structure:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.cogini.com/images/blog/aws-full.png" alt="AWS VPC structure" width="100%"/&gt;&lt;/p&gt;
&lt;p&gt;The app runs in a Virtual Private Cloud (&lt;a href="https://aws.amazon.com/vpc/"&gt;VPC&lt;/a&gt;)
across multiple Availability Zones
(&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html"&gt;AZs&lt;/a&gt;)
for high availability. The VPC is split into two networks, public and private.&lt;/p&gt;
&lt;p&gt;All the data is in the private network, with controlled access. All traffic
from the outside world goes through the Application Load Balancer
(&lt;a href="https://aws.amazon.com/elasticloadbalancing/"&gt;ALB&lt;/a&gt;), which proxies HTTP
requests to the application instances.&lt;/p&gt;
&lt;p&gt;The app runs in an Auto Scaling Group
(&lt;a href="https://aws.amazon.com/autoscaling/"&gt;ASG&lt;/a&gt;), which dynamically changes the
number of running instances according to load. It also helps ensure high
availability, automatically starting instances in different data
centers (AZs) in case of problems.&lt;/p&gt;
&lt;p&gt;We use the AWS Relational Database Service (&lt;a href="https://aws.amazon.com/rds/"&gt;RDS&lt;/a&gt;)
for the database, which handles high availability across multiple AZs.
Similarly, we can use &lt;a href="https://aws.amazon.com/elasticsearch-service/"&gt;Elasticsearch&lt;/a&gt; for
full text search and &lt;a href="https://aws.amazon.com/elasticache/"&gt;Elasticache&lt;/a&gt; for caching.
Simple Storage Service (&lt;a href="https://aws.amazon.com/s3/"&gt;S3&lt;/a&gt;) stores application file data.&lt;/p&gt;
&lt;p&gt;Bastion hosts allow users to remotely access instances in the private subnet.
Users connect via ssh to the bastion, which forwards the connection to back end servers.
This function can now be handled better by Amazon's new &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html"&gt;AWS Systems Manager Session
Manager&lt;/a&gt;
service.&lt;/p&gt;
&lt;p&gt;Similarly, the &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html"&gt;NAT GW&lt;/a&gt;
allows application servers in the private network to connect to services on the
public network, e.g. partner APIs.&lt;/p&gt;
&lt;p&gt;DevOps servers are EC2 instances used for deployment and management functions
within the VPC, e.g. running Jenkins CI or ops tools like Ansible. In this
case, we used the DevOps server to help ease the transition to the cloud,
handling deployment processes. As we automated the application deployment, we
switched to AWS CodeBuild.&lt;/p&gt;
&lt;h2&gt;Meeting legacy apps half way&lt;/h2&gt;
&lt;p&gt;Ideally, modern cloud applications do not store data on the app server. They
keep all application data in a separate shared location, either RDS database or
S3 file store. This allows us to automatically start and stop multiple
instances according to load or as part of the deployment process.&lt;/p&gt;
&lt;p&gt;Making complex legacy apps do this can be a lot of work, though. For this
client, we used &lt;a href="https://aws.amazon.com/efs/"&gt;EFS&lt;/a&gt; to allow apps to share
temporary files between servers. One server can write a file and another server
can read it back on a subsequent request. This allows the application to use
temp files as if it were running on only one server, without needing to make
user sessions "sticky" to one server.&lt;/p&gt;
&lt;h2&gt;Deploying with CodeDeploy and Capistrano&lt;/h2&gt;
&lt;p&gt;The client originally deployed using Capistrano, pushing code directly to
production systems via ssh. That doesn't work when there are multiple servers
in an ASG, though, as we would need to push to all of them. It's also fragile
and uncoordinated, so we risk having the system in an inconsistent state
during a deploy or rollback.&lt;/p&gt;
&lt;p&gt;The continuous integration / continuous deployment (CI/CD) system automates the
process of building and deploying code. It watches the code repository for
changes, runs tests, builds releases and automatically deploys them to
production. It monitors the success of the deployment, rolling it back in case
of problems.&lt;/p&gt;
&lt;p&gt;In this case, just getting the app building in CI/CD was a big task, so we
first implemented a hybrid system. Developers continued to use Capistrano to
deploy the app using &lt;code&gt;cap deploy&lt;/code&gt;. Instead of deploying directly to the prod
server, however, they pushed code to a DevOps server. At the last step, custom
rake tasks packaged the code and turned it into a
&lt;a href="https://aws.amazon.com/codedeploy/"&gt;CodeDeploy&lt;/a&gt; release, which we deployed
into production.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://aws.amazon.com/quickstart/architecture/blue-green-deployment/"&gt;Blue / Green
deployment&lt;/a&gt;
makes systems more reliable by taking advantage of how easy it is to start
temporary servers in the cloud. Instead of updating code on existing servers,
we can start new servers and deploy new code to them. Once we are sure it works, we
switch traffic to the new servers and shut down the old ones.&lt;/p&gt;
&lt;h2&gt;Encryption&lt;/h2&gt;
&lt;p&gt;Applications with sensitive user data such as health care and financial services
require high security, and encryption has become a baseline requirement for all
applications. In this design, we used encryption everywhere: all data at rest
is encrypted, and all data in transit is encrypted. That means turning on
encryption for S3, RDS and EBS disk volumes, and using SSL on the ALB for
external traffic, between the ALB and the app, and between the app and RDS and
Elasticsearch.&lt;/p&gt;
&lt;h1&gt;Show me the code!&lt;/h1&gt;
&lt;p&gt;Here is a complete
&lt;a href="https://github.com/cogini/multi-env-deploy"&gt;example of deploying an app to AWS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It supports multiple components, e.g. web front end, background job handler,
periodic jobs, a separate server to handle API traffic or web sockets
connections. It uses RDS for the database, Redis or Memcached for caching,
Elasticsearch, a CDN for static assets, SSL, S3 buckets, and encryption.&lt;/p&gt;
&lt;p&gt;The app can run in an autoscaling group and use a CI/CD pipeline to handle blue/green
deployment. It supports multiple environments: dev, staging, prod, demo, with
slight differences for each.&lt;/p&gt;
&lt;p&gt;It's built in a modular way using Terraform, Ansible and Packer. We have
used it to deploy multiple complex apps, so it handles many things that you
will need, but it's also flexible enough to be tweaked when necessary for
special requirements. It represents months of work.&lt;/p&gt;
&lt;p&gt;These modules cover the following scenarios:&lt;/p&gt;
&lt;h2&gt;Minimal EC2 + RDS&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;VPC with public, private and database subnets&lt;/li&gt;
&lt;li&gt;App runs in EC2 instance(s) in the public subnet&lt;/li&gt;
&lt;li&gt;RDS database&lt;/li&gt;
&lt;li&gt;Route53 DNS with health checks directs traffic to app&lt;/li&gt;
&lt;li&gt;Data stored in S3&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is good for a simple app, and is also a stepping stone when deploying
more complex apps. The EC2 instance can be used for development or as a canary
instance.&lt;/p&gt;
&lt;h2&gt;CloudFront for assets&lt;/h2&gt;
&lt;p&gt;Store app assets like JS and CSS in CloudFront for performance.&lt;/p&gt;
&lt;h2&gt;CodePipeline for CI/CD&lt;/h2&gt;
&lt;p&gt;Whenever code changes, pull from git, build in CodeBuild, run tests and deploy
automatically using CodeDeploy. Run tests against resources such
as RDS or Redis.&lt;/p&gt;
&lt;h2&gt;Auto Scaling Group and Load Balancer&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;App runs in an ASG in the private VPC subnet&lt;/li&gt;
&lt;li&gt;Blue/Green deployment&lt;/li&gt;
&lt;li&gt;SSL using Amazon Certificate Manager&lt;/li&gt;
&lt;li&gt;Spot instances to reduce cost&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Worker ASG&lt;/h2&gt;
&lt;p&gt;Worker runs background tasks in an ASG, with its own build and deploy pipeline.&lt;/p&gt;
&lt;h2&gt;Multiple front end apps&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Load Balancer routes traffic between multiple front end apps&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Shared S3 buckets&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Share data between S3 buckets&lt;/li&gt;
&lt;li&gt;Use signed URLs to handle protected user content&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Static website&lt;/h2&gt;
&lt;p&gt;Build the public website using a static site generator in CodeBuild, deploying
to CloudFront CDN. Use Lambda@Edge to rewrite URLs.&lt;/p&gt;
&lt;h2&gt;Elasticache&lt;/h2&gt;
&lt;p&gt;Add Elasticache Redis or Memcached for app caching.&lt;/p&gt;
&lt;h2&gt;Elasticsearch&lt;/h2&gt;
&lt;p&gt;Add Elasticsearch for the app.&lt;/p&gt;
&lt;h2&gt;DevOps&lt;/h2&gt;
&lt;p&gt;Add a DevOps instance to handle deployment and management tasks.&lt;/p&gt;
&lt;h2&gt;Bastion host&lt;/h2&gt;
&lt;p&gt;Add a bastion host to control access to servers in the private subnet,
or use AWS SSM Session Manager.&lt;/p&gt;
&lt;h2&gt;Prometheus metrics&lt;/h2&gt;
&lt;p&gt;Add Prometheus for application metrics and monitoring.&lt;/p&gt;
&lt;h2&gt;SES&lt;/h2&gt;
&lt;p&gt;Use SES for email.&lt;/p&gt;
&lt;h1&gt;How it works&lt;/h1&gt;
&lt;p&gt;It uses &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; to create the infrastructure,
&lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt; and &lt;a href="https://www.packer.io/"&gt;Packer&lt;/a&gt; to set
up instances and AMIs. It uses AWS CodePipeline/CodeBuild/CodeDeploy to build
and deploy code, running the app components in one or more autoscaling groups
running EC2 instances.&lt;/p&gt;
&lt;p&gt;The base of the system is Terraform and &lt;a href="https://github.com/gruntwork-io/terragrunt"&gt;Terragrunt&lt;/a&gt;.
Common Terraform modules can be enabled according to the specific application
requirements. Similarly, it uses common Ansible playbooks which can be modified
for specific applications. If an app needs something special, we can easily add a
custom module for it.&lt;/p&gt;
&lt;p&gt;We use the following terminology:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Apps are under an &lt;code&gt;org&lt;/code&gt;, or organization, e.g. a company. &lt;code&gt;org_unique&lt;/code&gt; is
  a globally unique identifier, used to name resources in global namespaces, e.g. S3 buckets&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An &lt;code&gt;env&lt;/code&gt; is an environment, e.g. dev, stage, or prod. Each gets its own
  AWS account&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An &lt;code&gt;app&lt;/code&gt; is a single shared set of data, potentially accessed by multiple
  front end interfaces and back end workers. Each app gets its own VPC.
  A separate VPC, generally one per environment, handles logging and monitoring
  using ELK and Prometheus&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A &lt;code&gt;comp&lt;/code&gt; is an application component&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We have three standard types of components: web app, worker and cron.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Web apps&lt;/strong&gt; process external client requests. Simple apps consist of only a single
web app, but complex apps may have more, e.g. an API server, an admin interface,
or an instance per customer.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workers&lt;/strong&gt; handle asynchronous background processing driven by a job queue
such as Sidekiq, SQS or a Kafka stream. They make the front end more responsive
by offloading long running tasks. The number of worker instances in the ASG
depends on the load.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Cron&lt;/strong&gt; servers handle timed batch workloads, e.g. periodic jobs. From a
provisioning perspective, there is not much difference between a worker and a
cron instance, except that cron instances are expected to always be running so
that they can schedule jobs.  Generally speaking, we prefer to move periodic
tasks to Lambda functions where possible.&lt;/p&gt;
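&lt;p&gt;As a sketch, a periodic job can move to a Lambda function triggered on an
EventBridge schedule. The resource names here are illustrative, not part of
the standard modules:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Run a report Lambda every night at 02:00 UTC
resource "aws_cloudwatch_event_rule" "nightly" {
  name                = "foo-nightly-report"
  schedule_expression = "cron(0 2 * * ? *)"
}

resource "aws_cloudwatch_event_target" "nightly" {
  rule = aws_cloudwatch_event_rule.nightly.name
  arn  = aws_lambda_function.nightly_report.arn
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
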
&lt;p&gt;We normally run application components in an auto scaling group, allowing
them to start and stop according to load. This also provides high availability,
as the ASG will start instances in a different availability zone if they die.
This makes it useful even if we normally only have one instance running.&lt;/p&gt;
&lt;p&gt;Running in an ASG requires that instances start from a "template" AMI and
be stateless, storing their data in S3 or RDS. We can also run components in
standalone EC2 instances, useful for development and earlier in the process of
migrating the app to the cloud.&lt;/p&gt;
&lt;p&gt;We can also deploy the app to containers via ECS as part of the same system.
Everything is tied together with a common ALB, so it's just a question of
routing traffic.&lt;/p&gt;
&lt;p&gt;When possible, we utilize managed AWS services such as RDS, ElastiCache, and
Elasticsearch. When managed services lack functionality, are immature, or are
expensive at high load, we can run our own.&lt;/p&gt;
&lt;p&gt;The system makes use of CloudFront to host application assets as well as static
content websites or "JAMstack" apps using tools like
&lt;a href="https://www.gatsbyjs.org/"&gt;Gatsby&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We deploy the application with AWS CodeDeploy, using a blue/green deployment
strategy. The CodeDeploy releases can be built by CodePipeline or on a DevOps EC2 instance.&lt;/p&gt;
&lt;p&gt;By default we use Route53 for DNS and ACM for certificates, though the system
can also work with external DNS, certificates, and other CDNs such as Cloudflare.&lt;/p&gt;
&lt;h2&gt;Terraform structure&lt;/h2&gt;
&lt;p&gt;Using Terragrunt, we separate the configuration into common modules,
app configuration and environment-specific variables.&lt;/p&gt;
&lt;p&gt;Under the &lt;code&gt;terraform&lt;/code&gt; directory is the &lt;code&gt;modules&lt;/code&gt; directory and a directory for
each app, e.g. &lt;code&gt;foo&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;terraform
    modules
    foo
    bar
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For many apps, the recommended Terragrunt structure in &lt;a href="https://terragrunt.gruntwork.io/use-cases/keep-your-terraform-code-dry/"&gt;Keep your Terraform
code DRY&lt;/a&gt;
and &lt;a href="https://github.com/gruntwork-io/terragrunt-infrastructure-live-example"&gt;example&lt;/a&gt;
works fine. It uses a directory hierarchy like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;account&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="n"&gt;resources&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;e.g.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;dev
    stage
        us-east-1
            asg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this case, we may have multiple prod environments in different regions, each
potentially with its own AWS account. Instead of a deep directory hierarchy, we use a
flatter structure combined with environment variables that determine which config
vars to load.&lt;/p&gt;
&lt;p&gt;Under the app directory are:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;terragrunt.hcl
common.yml
dev.yml
prod.yml
dev
prod
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;terragrunt.hcl&lt;/code&gt; is the top level config file. It loads configuration from YAML
files based on the environment, starting with common settings in &lt;code&gt;common.yml&lt;/code&gt;
and overriding them based on the environment, e.g. &lt;code&gt;dev.yml&lt;/code&gt;.&lt;/p&gt;
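&lt;p&gt;A minimal sketch of this loading logic in &lt;code&gt;terragrunt.hcl&lt;/code&gt;, with
illustrative names; the real file may select the environment differently:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;locals {
  env      = get_env("ENV", "dev")
  common   = yamldecode(file("${get_terragrunt_dir()}/common.yml"))
  env_vars = yamldecode(file("${get_terragrunt_dir()}/${local.env}.yml"))

  # Environment settings override the common defaults
  config = merge(local.common, local.env_vars)
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
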
&lt;p&gt;Configure &lt;code&gt;common.yml&lt;/code&gt; with the names identifying the app you are
building, e.g. &lt;code&gt;org&lt;/code&gt; and &lt;code&gt;app&lt;/code&gt;, and set the region it will run in.&lt;/p&gt;
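&lt;p&gt;For example, a &lt;code&gt;common.yml&lt;/code&gt; might look like this; the key names and
values are illustrative:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# common.yml
org: cogini
org_unique: cogini-foo-20240108
app: foo
aws_region: us-east-1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
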
&lt;p&gt;Next configure the resources for the environment, e.g. &lt;code&gt;dev&lt;/code&gt;.  Each resource
has a directory which defines its name and a &lt;code&gt;terragrunt.hcl&lt;/code&gt; which sets
dependencies and variables.&lt;/p&gt;
&lt;p&gt;Directories under each environment define which modules will be used.
For example, this defines a single web app ASG behind a public load balancer,
SSL cert, Route53 domain, RDS database, CodePipeline building in a custom
container image, deploying with CodeDeploy, using KMS encryption keys:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;acm-public
asg-app
codedeploy-app
codedeploy-deployment-app-asg
codepipeline-app
ecr-build-app
iam-codepipeline
iam-codepipeline-app
iam-instance-profile-app
iam-s3-request-logs
kms
launch-template-app
lb-public
rds-app
route53-delegation-set
route53-public
route53-public-www
s3-app
s3-codepipeline-app
s3-request-logs
sg-app-private
sg-db
sg-lb-public
sns-codedeploy-app
target-group-default
vpc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Modules are named by the AWS component plus a component-name suffix, e.g. &lt;code&gt;asg-api&lt;/code&gt;
for an autoscaling group for a web component handling API requests. Each
component has a directory containing a Terragrunt config file specifying the
module and any necessary variables. For example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;terraform&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;${get_terragrunt_dir()}/../../../modules//asg&amp;quot;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;dependency&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;vpc&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;config_path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;../vpc&amp;quot;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;dependency&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;lt&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;config_path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;../launch-template-api&amp;quot;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;dependency&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;tg&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;config_path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;../target-group-api&amp;quot;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;include&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;find_in_parent_folders&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nb"&gt;inputs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;comp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;api&amp;quot;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;min_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;max_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;desired_capacity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;wait_for_capacity_timeout&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;2m&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;wait_for_elb_capacity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;health_check_grace_period&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;30&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;health_check_type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ELB&amp;quot;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;target_group_arns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;dependency.tg.outputs.arn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;subnets&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;dependency.vpc.outputs.subnets&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;private&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;launch_template_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;dependency.lt.outputs.launch_template_id&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;launch_template_version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;$Latest&amp;quot;&lt;/span&gt;&lt;span class="c1"&gt; # $Latest, or $Default&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;spot_max_price&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;on_demand_base_capacity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;on_demand_percentage_above_base_capacity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;override_instance_types&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;t3a.nano&amp;quot;, &amp;quot;t3.nano&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;source&lt;/code&gt; identifies the Terraform module, in this case &lt;code&gt;asg&lt;/code&gt;,
and the &lt;code&gt;dependency&lt;/code&gt; blocks link it to other components. The &lt;code&gt;inputs&lt;/code&gt;
block sets variables, e.g. ASG size, health check parameters, and spot instance settings.&lt;/p&gt;
&lt;p&gt;Outputs of one module are stored in the state, and we can then use them as
inputs for other modules. The system is flexible, using separate modules identified
by name and path. This makes it straightforward to define multiple front end or
worker components or customize modules when necessary. This is a key advantage
of Terraform over CloudFormation. When CloudFormation config gets large, it
becomes hard to manage and extend. Terraform also supports multiple providers,
not just AWS.&lt;/p&gt;
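&lt;p&gt;For example, &lt;code&gt;dependency.tg.outputs.arn&lt;/code&gt; in the config above resolves to
an &lt;code&gt;output&lt;/code&gt; declared in the target group module, along these lines (the
resource name is illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# modules/target-group/outputs.tf
output "arn" {
  description = "ARN of the target group"
  value       = aws_lb_target_group.this.arn
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
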
&lt;p&gt;The configuration for &lt;code&gt;dev&lt;/code&gt; is normally roughly the same as &lt;code&gt;prod&lt;/code&gt;, but
with e.g. smaller instances. It's possible, however, to use a different
structure as needed.&lt;/p&gt;
&lt;h2&gt;Ansible structure&lt;/h2&gt;
&lt;p&gt;Ansible is used to set up AMIs, perform tasks like creating database users, and
generate config files in S3 buckets from templates. We may use Ansible
Vault to store secrets or put them into SSM Parameter Store.&lt;/p&gt;
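&lt;p&gt;As a sketch, a vars file can either be encrypted with Ansible Vault or pull
secrets from SSM Parameter Store at runtime via the &lt;code&gt;aws_ssm&lt;/code&gt; lookup
plugin; the parameter path here is illustrative:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# vars/foo/dev/app-secrets.yml
db_password: "{{ lookup('aws_ssm', '/foo/dev/db/password', region='us-east-1') }}"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
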
&lt;p&gt;The structure generally follows the approach in "&lt;a href="https://www.cogini.com/blog/setting-ansible-variables-based-on-the-environment/"&gt;Setting Ansible variables
based on the environment&lt;/a&gt;".&lt;/p&gt;
&lt;p&gt;&lt;code&gt;playbooks&lt;/code&gt; contains common and app-specific playbooks.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;manage-users.yml
files
foo
    app-ssm.yml
    bastion.yml
    bootstrap-db-mysql.yml
    bootstrap-db-pg.yml
    bootstrap-db-ssm.yml
    config-app-https.yml
    config-app.yml
    devops.yml
    packer-app.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;files&lt;/code&gt; has common files used by the playbooks, e.g. ssh public keys used by &lt;code&gt;manage-users.yml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vars&lt;/code&gt; contains configuration for each env. Most configuration is done here,
not in the inventory, due to the need to manage multiple environments.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;foo
    dev
        app-https.yml
        app-secrets.yml
        app.yml
        bastion.yml
        common.yml
        db-app.yml
        devops.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Following is an example playbook used to provision an AMI, &lt;code&gt;playbooks/$APP/packer-$COMP.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Install base&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;hosts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;*&amp;#39;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;become&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;true&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;vars&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;app_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;foo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;comp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;app&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;tools_other_packages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;chrony&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;# Parse cloud-init&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jq&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;# Sync config from S3&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;awscli&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;vars_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/{{ app_name }}/{{ env }}/common.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/{{ app_name }}/{{ env }}/app.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/{{ app_name }}/{{ env }}/ses.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/{{ app_name }}/{{ env }}/ses.vault.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/foo/{{ env }}/elixir-release.yml&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;common-minimal&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;tools-other&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;cogini.users&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;iptables&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;iptables-http&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;codedeploy-agent&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;cronic&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;postfix-sender&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;mesaguy.prometheus&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;postgres-client&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;cogini.elixir-release&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It loads its config using &lt;code&gt;vars_files&lt;/code&gt; from the vars directory, then runs a
series of roles.&lt;/p&gt;
&lt;h2&gt;Packer structure&lt;/h2&gt;
&lt;p&gt;To actually build the AMI, we use Packer.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Makefile
README.md
builder
    build.sh
    build_centos.yml
    build_ubuntu.yml
foo
    dev
        build_app.sh
        build_cron.sh
        build_worker.sh
set_env.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Set OS environment vars and run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;APP&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;/&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;ENV&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;/build_app.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This launches an EC2 instance and runs the &lt;code&gt;playbooks/foo/packer-app.yml&lt;/code&gt;
Ansible playbook to configure it. The result is an AMI ID, which you then set
as e.g. the &lt;code&gt;image_id&lt;/code&gt; var in &lt;code&gt;terraform/foo/dev/launch-template-app/terragrunt.hcl&lt;/code&gt;.&lt;/p&gt;
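&lt;p&gt;For example, with a placeholder AMI ID (the variable names other than
&lt;code&gt;image_id&lt;/code&gt; are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# terraform/foo/dev/launch-template-app/terragrunt.hcl
inputs = {
  comp          = "app"
  image_id      = "ami-0123456789abcdef0"
  instance_type = "t3a.nano"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
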
&lt;h2&gt;Need help?&lt;/h2&gt;
&lt;p&gt;Need help deploying your complex app? &lt;a href="/contact/"&gt;Get in touch!&lt;/a&gt;&lt;/p&gt;</content><category term="DevOps"/><category term="terraform"/><category term="ansible"/><category term="packer"/><category term="rails"/><category term="aws"/></entry><entry><title>A new approach to deploying Elixir apps: mix_deploy</title><link href="https://www.cogini.com/blog/a-new-approach-to-deploying-elixir-apps-mix_deploy/" rel="alternate"/><published>2020-01-07T00:00:00+08:00</published><updated>2020-01-07T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2020-01-07:/blog/a-new-approach-to-deploying-elixir-apps-mix_deploy/</id><summary type="html">&lt;p&gt;A new approach to deploying Elixir apps: mix_deploy&lt;/p&gt;</summary><content type="html">&lt;p&gt;There has been a lot of action lately in Elixir to integrate releases into the
core system and improve the process of configuring the application.  We still
need, however, some way to deploy the releases into production and pull
configuration from the environment.&lt;/p&gt;
&lt;p&gt;I have written two new libraries which focus on the level below the Erlang VM.
&lt;a href="https://github.com/cogini/mix_systemd"&gt;mix_systemd&lt;/a&gt; generates a systemd unit
file to run the app, and &lt;a href="https://github.com/cogini/mix_deploy"&gt;mix_deploy&lt;/a&gt;
generates scripts to deploy the app to servers, interface with systemd and
handle the deployment lifecycle of systems like
&lt;a href="https://aws.amazon.com/codedeploy/"&gt;AWS CodeDeploy&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;They are used when deploying to virtual machines or dedicated servers, not
a container system like Kubernetes.&lt;/p&gt;
&lt;p&gt;These libraries are essentially a collection of opinionated templates which
work together: they look at the configuration of the app in &lt;code&gt;mix.exs&lt;/code&gt; and
&lt;code&gt;config/prod.exs&lt;/code&gt; and generate a systemd unit file which calls shell scripts.&lt;/p&gt;
&lt;p&gt;They also automate the process of managing config files with production
secrets, as well as runtime configuration from &lt;code&gt;cloud-init&lt;/code&gt; to configure a
cluster.&lt;/p&gt;
&lt;p&gt;The focus is on three main scenarios:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Building and deploying on the same server&lt;/li&gt;
&lt;li&gt;Building on a CI system like AWS CodeBuild and deploying to the cloud
   with a CD system like AWS CodeDeploy&lt;/li&gt;
&lt;li&gt;Building on a CI server and deploying to dedicated servers using Ansible&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The same approach scales from a $5/month Digital Ocean server to AWS running a
load balancer and autoscaling group, to fleets of dedicated servers. It
provides a simple and cheap way to get started. Elixir's excellent concurrency
support means it works well with a small number of servers with many
cores, rather than slicing the workload into many small instances.&lt;/p&gt;
&lt;p&gt;Documentation is here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/cogini/mix_systemd"&gt;mix_systemd&lt;/a&gt; GitHub project&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cogini/mix_deploy"&gt;mix_deploy&lt;/a&gt; GitHub project&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cogini/mix-deploy-example"&gt;A complete sample app&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A step by step tutorial for beginners: &lt;a href="https://www.cogini.com/blog/deploying-phoenix-to-digital-ocean-with-mix-deploy/"&gt;Deploying an Elixir app to Digital Ocean with mix_deploy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/deploying-elixir-apps-with-ansible/"&gt;Deploying Elixir apps with Ansible&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/"&gt;Best practices for deploying Elixir apps &lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We have been using these libraries to deploy a number of production
applications. The goal is that you can simply add the libraries to your project
and they will do the right thing, while still being customizable to handle
different situations. Contributions are welcome!&lt;/p&gt;
&lt;p&gt;I am &lt;code&gt;reachfh&lt;/code&gt; on the Freenode &lt;code&gt;#elixir-lang&lt;/code&gt; IRC channel and &lt;code&gt;jakemorrison&lt;/code&gt; on the
Elixir Slack and Discord. I am in Taiwan, though, so look for me in that
timezone :-).&lt;/p&gt;
different from other languages. It's very mature and well thought out, though.
Once you get the hang of it, it's quite nice.&lt;/p&gt;
&lt;p&gt;Our &lt;a href="https://github.com/cogini/mix_deploy"&gt;mix_deploy&lt;/a&gt; and
&lt;a href="https://github.com/cogini/mix_systemd"&gt;mix_systemd&lt;/a&gt; libraries
help automate the process. This &lt;a href="https://github.com/cogini/mix-deploy-example"&gt;working
example&lt;/a&gt; puts
the pieces together to get you started quickly.
This post gives more background and links to advanced topics.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Big picture, we deploy Erlang "releases" using systemd for process supervision.
We run in cloud or dedicated server instances. We build and deploy on the same
server or build on a continuous integration server and deploy using Ansible or
AWS CodeDeploy.&lt;/p&gt;
&lt;p&gt;We make healthcare and financial apps, so we are paranoid about security. We
run apps that get large amounts of traffic, so we are careful about
performance. And we deploy to the cloud, so the apps need to be stateless,
dynamically scaled under the control of a system like AWS CodeDeploy.&lt;/p&gt;
&lt;h1&gt;Locking dependency versions&lt;/h1&gt;
&lt;p&gt;The process starts in your dev environment. When you run &lt;code&gt;mix deps.get&lt;/code&gt;,
mix fetches the dependencies listed in &lt;code&gt;mix.exs&lt;/code&gt;, but they are normally
only loosely specified, e.g. &lt;code&gt;{:plug_cowboy, "~&amp;gt; 2.0"}&lt;/code&gt; will actually install
the latest compatible version, such as 2.6.3.&lt;/p&gt;
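&lt;p&gt;The version requirement lives in the &lt;code&gt;deps&lt;/code&gt; function in
&lt;code&gt;mix.exs&lt;/code&gt;; the Phoenix entry here is just an illustration:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# mix.exs
defp deps do
  [
    {:plug_cowboy, "~&amp;gt; 2.0"},
    {:phoenix, "~&amp;gt; 1.4"}
  ]
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
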
&lt;p&gt;Mix records the specific versions that it fetched in the &lt;code&gt;mix.lock&lt;/code&gt; file.
Later, on the build machine, mix uses the specific package version or git
reference in the lock file to build the release.&lt;/p&gt;
&lt;p&gt;This makes a release completely predictable and reproducible. It does not
depend on the version of libraries installed on the server, and one app doesn't
affect another. It's like Ruby's &lt;code&gt;Gemfile.lock&lt;/code&gt; or Node's &lt;code&gt;package-lock.json&lt;/code&gt;
files. This locking happens automatically as part of the standard mix process,
just make sure you check the &lt;code&gt;mix.lock&lt;/code&gt; file into source control.&lt;/p&gt;
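&lt;p&gt;In practice that's just fetching deps and committing the resulting lock file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix deps.get
git add mix.lock
git commit -m &amp;quot;Lock dependency versions&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
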
&lt;h1&gt;Managing Erlang and Elixir versions&lt;/h1&gt;
&lt;p&gt;For simple deployments, we can &lt;a href="https://github.com/cogini/mix-deploy-example/tree/master/bin"&gt;install Erlang and Elixir from binary packages&lt;/a&gt;.
Instead of using the packages that come with the OS, which are generally
out of date, use the &lt;a href="https://www.erlang-solutions.com/resources/download.html"&gt;packages from Erlang
Solutions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;One disadvantage of OS packages is that only one version can be
installed at a time. If different projects need different versions, then we
have a conflict. Similarly, when we upgrade Erlang or Elixir, we need to first
test the code with the new version, moving it through dev and test
environments, then putting it into production. If anything goes wrong, we need
to be able to roll back quickly. To support this, we need to precisely specify
runtime versions and keep multiple versions installed so we can switch between
them.&lt;/p&gt;
&lt;p&gt;When building a release for production, Elixir is just another library
dependency as far as Erlang is concerned. We can also package the Erlang
virtual machine inside the release, so it's not necessary to install Erlang on
the prod machine globally at all. Just install the release and it includes the
matching VM.&lt;/p&gt;
&lt;p&gt;That lets us upgrade production systems with no drama. We have apps
which have been running continuously for years on clusters of servers,
upgrading through multiple Elixir and Erlang versions with no downtime.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://asdf-vm.com/"&gt;ASDF&lt;/a&gt; manages multiple versions of Erlang, Elixir and Node.js.
It is a language-independent equivalent to tools like Ruby's
&lt;a href="https://rvm.io/"&gt;RVM&lt;/a&gt; or &lt;a href="https://github.com/rbenv/rbenv"&gt;rbenv&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;.tool-versions&lt;/code&gt; file in the project root specifies the versions to use:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;erlang 22.2
elixir 1.9.4
nodejs 10.15.3
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;ASDF looks at the &lt;code&gt;.tool-versions&lt;/code&gt; file and automatically sets the path to
point to the correct version. The build script for the project runs &lt;code&gt;asdf
install&lt;/code&gt; to install the matching Erlang, Elixir and Node.js versions.&lt;/p&gt;
&lt;p&gt;See &lt;a href="https://www.cogini.com/blog/using-asdf-with-elixir-and-phoenix/"&gt;Using ASDF with Elixir and Phoenix&lt;/a&gt;
for details.&lt;/p&gt;
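&lt;p&gt;Getting a machine set up is a few commands, e.g. (using the standard ASDF plugins):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;asdf plugin add erlang
asdf plugin add elixir
asdf plugin add nodejs
asdf install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
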
&lt;h1&gt;Building and testing&lt;/h1&gt;
&lt;p&gt;We normally develop on macOS and deploy to Linux. The Erlang VM mostly isolates
us from the operating system, and mix manages library dependencies tightly, so
we don't find it necessary to use Docker or Vagrant. It &lt;em&gt;is&lt;/em&gt; necessary,
however, to build the release with an Erlang VM executable that matches your target
system. You can't just build the release on macOS and use it on a Linux server.&lt;/p&gt;
&lt;p&gt;For simple projects, we &lt;a href="https://www.cogini.com/blog/deploying-an-elixir-app-to-digital-ocean-with-mix-deploy/"&gt;build on the same server that runs the
app&lt;/a&gt;: check
out the code from git, build a release, then deploy it locally running under systemd.&lt;/p&gt;
&lt;p&gt;In larger projects, a CI/CD server checks out the code, runs tests, then builds a
release. We then deploy to the cloud using &lt;a href="https://github.com/cogini/mix_deploy#codedeploy"&gt;AWS CodeDeploy&lt;/a&gt;
or &lt;a href="https://www.cogini.com/blog/deploying-elixir-apps-with-ansible/"&gt;deploy the release using Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Like your dev machine, the build server runs ASDF. When it makes a build, it
automatically uses the versions of Erlang and Elixir specified in the
&lt;code&gt;.tool-versions&lt;/code&gt; file, which is in sync with the code. These &lt;a href="https://github.com/cogini/mix-deploy-example/tree/master/bin"&gt;build
scripts&lt;/a&gt;
handle the setup and build process.&lt;/p&gt;
&lt;h1&gt;Erlang releases&lt;/h1&gt;
&lt;p&gt;The most important part of the deployment process is using Erlang "releases".
A release combines the Erlang VM, your application, and the libraries it
depends on into a tarball, which you deploy as a unit. The release has a script
to start the app, launched and supervised by the OS init system (e.g. systemd).
If it dies, the system restarts it.&lt;/p&gt;
&lt;p&gt;Releases handle a lot of the details you need to run things reliably in
production, e.g.:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Packaging&lt;/li&gt;
&lt;li&gt;Configuration&lt;/li&gt;
&lt;li&gt;Running migrations&lt;/li&gt;
&lt;li&gt;Getting a console on a running app&lt;/li&gt;
&lt;li&gt;Upgrades&lt;/li&gt;
&lt;/ul&gt;
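&lt;p&gt;Concretely, the release's control script exposes these as subcommands, e.g. for an app named &lt;code&gt;foo&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bin/foo start    # run in the foreground
bin/foo daemon   # run in the background
bin/foo remote   # IEx console on the running app
bin/foo restart
bin/foo stop
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
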
&lt;h2&gt;Building releases&lt;/h2&gt;
&lt;p&gt;Since Elixir 1.9, mix has &lt;a href="https://hexdocs.pm/mix/Mix.Tasks.Release.html"&gt;built in support&lt;/a&gt;
for creating releases. For earlier versions, use the
&lt;a href="https://github.com/bitwalker/distillery"&gt;Distillery&lt;/a&gt; library.&lt;/p&gt;
&lt;p&gt;Configure your release in &lt;code&gt;mix.exs&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;project&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:foo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;releases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="ss"&gt;prod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="ss"&gt;include_executables_for&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:unix&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="ss"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:assemble&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:tar&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;rel/vm.args.eex&lt;/code&gt; sets Erlang VM startup arguments. We normally tune it
to &lt;a href="https://www.cogini.com/blog/tuning-tcp-ports-for-your-elixir-app/"&gt;increase TCP ports&lt;/a&gt; for
high volume apps.&lt;/p&gt;
&lt;p&gt;Generate a template in your project under &lt;code&gt;rel&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;release.init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Edit it as needed, then build the release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nv"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod&lt;span class="w"&gt; &lt;/span&gt;mix&lt;span class="w"&gt; &lt;/span&gt;release
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This creates a tarball with everything you need to deploy:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;_build/prod/foo-0.1.0.tar.gz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Running database migrations&lt;/h2&gt;
&lt;p&gt;In the deployed system, we don't have mix. The release command script allows us
to call an Elixir function to
&lt;a href="https://www.cogini.com/blog/running-ecto-migrations-in-a-release/"&gt;run migrations from a release&lt;/a&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/srv/foo/current/bin/foo&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;eval&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Foo.Release.migrate&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
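
&lt;p&gt;A minimal version of that helper, following the common pattern from the mix release docs (module and app names are placeholders):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;defmodule Foo.Release do
  @app :foo

  def migrate do
    # Load the app without starting it, then run migrations for each repo
    Application.load(@app)

    for repo &amp;lt;- Application.fetch_env!(@app, :ecto_repos) do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &amp;amp;Ecto.Migrator.run(&amp;amp;1, :up, all: true))
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;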

&lt;h1&gt;Configuration&lt;/h1&gt;
&lt;p&gt;There are four different kinds of things that we may want to configure:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Static information about application layout, e.g. file paths.
   This is the same for all machines in an environment, e.g. staging or prod.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Information specific to the environment, e.g. the hostname of the db
   server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Secrets such as db passwords, API keys or TLS keys.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Dynamic information such as the IP address of the server or other
   machines in the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Elixir has a couple of mechanisms for storing configuration. When you compile
the release, it converts Elixir-format config files like &lt;code&gt;config/prod.exs&lt;/code&gt;
into an initial application environment (&lt;code&gt;sys.config&lt;/code&gt;) that is read by
&lt;a href="https://hexdocs.pm/elixir/Application.html#get_env/3"&gt;Application.get_env/3&lt;/a&gt;.
That's fine for simple, relatively static apps. It's better to keep secrets
separate from the release, though.&lt;/p&gt;
&lt;p&gt;Elixir 1.9 releases support dynamic configuration at runtime. You can run the
Elixir file &lt;code&gt;config/releases.exs&lt;/code&gt; when it boots or use the shell script
&lt;code&gt;rel/env.sh.eex&lt;/code&gt; to set environment vars. With these you can theoretically do
anything. In practice, however, it can be more convenient and secure to process
the config outside of the app. That's where
&lt;a href="https://github.com/cogini/mix_systemd"&gt;mix_systemd&lt;/a&gt; and
&lt;a href="https://github.com/cogini/mix_deploy"&gt;mix_deploy&lt;/a&gt; come in.&lt;/p&gt;
&lt;h2&gt;Environment vars&lt;/h2&gt;
&lt;p&gt;The simplest way to configure your app is via OS environment variables.
You can set them via the systemd supervisor or container runtime.
Your application then calls &lt;code&gt;System.get_env/1&lt;/code&gt; in &lt;code&gt;config/releases.exs&lt;/code&gt; or
application startup. Note that these environment vars are read at &lt;em&gt;runtime&lt;/em&gt;,
not when building your app.&lt;/p&gt;
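&lt;p&gt;For example, a &lt;code&gt;config/releases.exs&lt;/code&gt; might read (var names are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;import Config

config :foo, Foo.Repo,
  url: System.get_env(&amp;quot;DATABASE_URL&amp;quot;),
  pool_size: String.to_integer(System.get_env(&amp;quot;POOL_SIZE&amp;quot;) || &amp;quot;10&amp;quot;)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
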
&lt;p&gt;&lt;a href="https://github.com/cogini/mix_systemd"&gt;mix_systemd&lt;/a&gt; supports reading
environment vars from files, e.g.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/srv/foo/etc/environment
/etc/foo/environment
/run/foo/environment
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This lets you set config defaults in the release, then override them in the
environment or at runtime.&lt;/p&gt;
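&lt;p&gt;An environment file is just &lt;code&gt;NAME=value&lt;/code&gt; lines, e.g. (values are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;PORT=4000
DATABASE_URL=ecto://foo_prod:Sekrit!@db.foo.local/foo_prod
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
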
&lt;h2&gt;Config providers&lt;/h2&gt;
&lt;p&gt;At a certain point, making everything into an environment var becomes annoying.
It's verbose and vars are simple strings, so you have to encode values
safely and convert them back to lists, integers or atoms.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://hexdocs.pm/elixir/Config.Provider.html"&gt;Config providers&lt;/a&gt;
load data on startup, merging it with the default application
environment before starting the VM. This lets us keep secrets
outside of the release file and change settings depending on where the app is
running. We also keep secrets out of the build environment, e.g. a shared CI
system.&lt;/p&gt;
&lt;p&gt;They support standard formats like &lt;a href="https://github.com/toml-lang/toml"&gt;TOML&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[foo.&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Foo.Repo&amp;quot;&lt;/span&gt;&lt;span class="k"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ecto://foo_prod:Sekrit!@db.foo.local/foo_prod&amp;quot;&lt;/span&gt;
&lt;span class="n"&gt;pool_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;

&lt;span class="k"&gt;[foo.&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;FooWeb.Endpoint&amp;quot;&lt;/span&gt;&lt;span class="k"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;secret_key_base&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;EOdJB1T39E5Cdeebyc8naNrOO4HBoyfdzkDy2I8Cxiq4mLvIQ/0tK12AK1ahrV4y&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Add the &lt;a href="https://hexdocs.pm/toml_config/readme.html"&gt;TOML config provider&lt;/a&gt; to &lt;code&gt;mix.exs&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;releases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="ss"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="ss"&gt;include_executables_for&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:unix&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="ss"&gt;config_providers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nc"&gt;TomlConfigProvider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/etc/foo/config.toml&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="ss"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:assemble&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:tar&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The startup scripts read the initial application environment compiled into the
release, parse the config file, merge the values, write the result to a temp
file, then start the VM. Because of that, they need a writable directory. That
is configured with the &lt;code&gt;RELEASE_TMP&lt;/code&gt; environment var, which you can set to the app's
&lt;code&gt;runtime_dir&lt;/code&gt;, e.g. &lt;code&gt;/run/foo&lt;/code&gt;.&lt;/p&gt;
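&lt;p&gt;For example, in &lt;code&gt;rel/env.sh.eex&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;export RELEASE_TMP=&amp;quot;/run/foo&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
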
&lt;h3&gt;Copying files&lt;/h3&gt;
&lt;p&gt;This config file approach is simple, but effective. The question is how to get
the environment files onto the server. When deploying a simple app on the same
server, we can just copy the &lt;code&gt;prod.secret.exs&lt;/code&gt; or environment file to &lt;code&gt;/etc/foo&lt;/code&gt;.
When deploying to dedicated servers, we can generate the config file using Ansible
and push it to the server.&lt;/p&gt;
&lt;p&gt;In cloud environments, we may run from a read-only image, e.g. an Amazon AMI,
which gets configured at start up based on the environment by copying the
config from an S3 bucket. See &lt;code&gt;deploy-sync-config-s3&lt;/code&gt; in
&lt;a href="https://github.com/cogini/mix_deploy"&gt;mix_deploy&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Config servers and vaults&lt;/h2&gt;
&lt;p&gt;You can also store config params in an external configuration system and
read them at runtime. An example is &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html"&gt;AWS Systems Manager Parameter
Store&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Set a parameter using the AWS CLI:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;aws&lt;span class="w"&gt; &lt;/span&gt;ssm&lt;span class="w"&gt; &lt;/span&gt;put-parameter&lt;span class="w"&gt; &lt;/span&gt;--name&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;/foo/prod/db/password&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;--type&lt;span class="w"&gt; &lt;/span&gt;‘SecureString’&lt;span class="w"&gt; &lt;/span&gt;--value&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;#39;&lt;/span&gt;Sekrit!&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;While it's possible to read params in &lt;code&gt;config/releases.exs&lt;/code&gt;, it's tedious.
Better is to grab all of them at once and write them to a file, then read it in
with a Config Provider like &lt;a href="https://github.com/caredox/aws_ssm_provider"&gt;aws_ssm_provider&lt;/a&gt;.&lt;/p&gt;
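&lt;p&gt;For example, fetching all the params for an environment in one call:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;aws ssm get-parameters-by-path --path &amp;#39;/foo/prod&amp;#39; --recursive --with-decryption
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
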
&lt;h2&gt;Application initialization&lt;/h2&gt;
&lt;p&gt;Instead of doing a lot of work in your &lt;code&gt;config/releases.exs&lt;/code&gt; file, keep it focused
on getting the data. Handle application config in your
&lt;a href="https://hexdocs.pm/elixir/Application.html#start/2"&gt;Application.start/2&lt;/a&gt; or
&lt;a href="https://hexdocs.pm/elixir/Supervisor.html#c:init/1"&gt;Supervisor.init/1&lt;/a&gt;. This
leverages the supervision structure of OTP, allowing components to fail and be
restarted with the right configuration.&lt;/p&gt;
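&lt;p&gt;For example, read the port when the supervision tree starts rather than in the config script (a sketch; &lt;code&gt;FooWeb.Router&lt;/code&gt; is a placeholder):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;def start(_type, _args) do
  # Read config at runtime, when the supervision tree starts
  port = Application.get_env(:foo, :port, 4000)

  children = [
    {Plug.Cowboy, scheme: :http, plug: FooWeb.Router, options: [port: port]}
  ]

  Supervisor.start_link(children, strategy: :one_for_one, name: Foo.Supervisor)
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
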
&lt;h1&gt;Supervising your app&lt;/h1&gt;
&lt;p&gt;In the Erlang OTP framework, we use supervisors to start and stop processes,
restarting them in case of problems. It's turtles all the way down: you need a
supervisor to make sure your Erlang VM is running, restarting it if there is a
problem.&lt;/p&gt;
&lt;p&gt;Ignore the haters, systemd is the best supervisor we have right now, and all
the Linux distros are standardizing on it. We might as well take advantage of it.
Systemd handles all the things that "well behaved" daemons need to do. Instead
of scripts, it has declarative config that handles standard situations. It
sets up the environment, handles logging and controls permissions.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/cogini/mix_systemd"&gt;mix_systemd&lt;/a&gt; generates a systemd unit
file for your app and &lt;a href="https://github.com/cogini/mix_deploy"&gt;mix_deploy&lt;/a&gt;
generates the scripts it needs to start and configure it.&lt;/p&gt;
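&lt;p&gt;The generated unit file looks roughly like this (simplified; the actual output has more settings):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;[Unit]
Description=foo

[Service]
Type=simple
User=foo
WorkingDirectory=/srv/foo/current
ExecStart=/srv/foo/current/bin/foo start
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
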
&lt;h1&gt;Permissions and directories&lt;/h1&gt;
&lt;p&gt;For security, following the &lt;a href="https://www.cogini.com/blog/improving-app-security-with-the-principle-of-least-privilege/"&gt;principle of least
privilege&lt;/a&gt;,
we limit the app to &lt;em&gt;only&lt;/em&gt; what it really needs to do its job. If the app is
compromised, the attacker can only do what the app can do.&lt;/p&gt;
&lt;p&gt;We use one OS user (&lt;code&gt;deploy&lt;/code&gt;) to upload the release files, and another (e.g.
&lt;code&gt;foo&lt;/code&gt;) to run the app. This means that the app only needs to have read-only access to
its own source code and config. The app user account does not need permissions
to restart the app, that's handled by the deploy user or systemd.&lt;/p&gt;
&lt;p&gt;We make use of systemd features and cloud services. Instead of writing our own
log files, we send them to journald, which sends them to CloudWatch Logs or ELK.
When running in the cloud, the app should be stateless. Instead of putting
files on the disk, it keeps state in an RDS database and uses S3 for file
storage.&lt;/p&gt;
&lt;p&gt;The result is that many apps can run without needing write access to
anything on the disk, improving security.&lt;/p&gt;
&lt;h1&gt;Deploying the app&lt;/h1&gt;
&lt;p&gt;So now we have a release tarball and some config files, time to put them on a
server.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/deploying-an-elixir-app-to-digital-ocean-with-mix-deploy/"&gt;Build and deploy to the same server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Build with CodeBuild and deploy with CodeDeploy &lt;a href="https://github.com/cogini/mix_deploy#codedeploy"&gt;doc&lt;/a&gt;
  and &lt;a href="https://github.com/cogini/mix-deploy-example/blob/master/config/aws.exs"&gt;example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/deploying-elixir-apps-with-ansible/"&gt;Build on a build server and deploy using Ansible&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Connecting to the outside world&lt;/h1&gt;
&lt;p&gt;The app isn't much use if we can't talk to it. There are two options for how to
receive traffic, direct or via a proxy.&lt;/p&gt;
&lt;p&gt;You can &lt;a href="https://www.cogini.com/blog/serving-your-phoenix-app-with-nginx/"&gt;serve your Phoenix app with Nginx&lt;/a&gt;,
but having the app listen directly gives you lower latency and less overall complexity.
Erlang can handle lots of load with no problems. For example, Heroku's routing
layer is based on Erlang. We have apps that handle a billion requests a day,
including DDOS attacks. You can &lt;a href="https://www.cogini.com/blog/benchmarking-phoenix-on-digital-ocean/"&gt;handle 3000 requests per second&lt;/a&gt;
on a simple $5/month Digital Ocean droplet.&lt;/p&gt;
&lt;p&gt;For a modern cloud app running behind a load balancer, listening on
port 4000 is fine; just tell the load balancer to use that port. For a
freestanding app, we need to listen on port 80 and/or port 443 for SSL.
We normally &lt;a href="https://www.cogini.com/blog/port-forwarding-with-iptables/"&gt;redirect traffic from port 80 to 4000 in the firewall using
iptables&lt;/a&gt;.&lt;/p&gt;
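&lt;p&gt;The redirect is a single nat rule, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
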
&lt;p&gt;You may need to set some &lt;a href="https://ninenines.eu/docs/en/cowboy/2.6/manual/cowboy_http/"&gt;HTTP options&lt;/a&gt;
that &lt;a href="https://www.nginx.com/"&gt;Nginx&lt;/a&gt; was dealing with, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:foo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;FooWeb.Endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="ss"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;compress&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;protocol_options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;max_keepalive&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5_000_000&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Things you probably don't need right now&lt;/h2&gt;
&lt;p&gt;While they are cool, you don't initially need to worry about:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Hot code updates&lt;/li&gt;
&lt;li&gt;Distributed Erlang&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Additional topics&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/deploying-an-elixir-app-to-digital-ocean-with-mix-deploy/"&gt;Deploying an Elixir app to Digital Ocean with mix_deploy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/serving-phoenix-static-assets-from-a-cdn/"&gt;Serving Phoenix static assets from a CDN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/deploying-elixir-apps-without-sudo/"&gt;Deploying Elixir apps without sudo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/benchmarking-phoenix-on-digital-ocean/"&gt;Benchmarking Phoenix on Digital Ocean&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/improving-app-security-with-the-principle-of-least-privilege/"&gt;Improving app security with the principle of least privilege&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/presentation-on-elixir-performance/"&gt;Presentation on Elixir performance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/incrementally-migrating-a-legacy-app-to-phoenix/"&gt;Incrementally migrating a legacy app to Phoenix&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cogini.com/blog/secure-web-applications-with-graphql-and-elixir/"&gt;Secure web applications with GraphQL and Elixir&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/></entry><entry><title>Running Ecto migrations in a release</title><link href="https://www.cogini.com/blog/running-ecto-migrations-in-a-release/" rel="alternate"/><published>2019-06-30T00:00:00+08:00</published><updated>2019-06-30T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-06-30:/blog/running-ecto-migrations-in-a-release/</id><summary type="html">&lt;p&gt;In a dev or test environment, we execute the &lt;code&gt;mix ecto.migrate&lt;/code&gt; command to run database migrations. When running from a release, the mix command is not available, so we execute &lt;code&gt;Ecto.Migrator.run/4&lt;/code&gt; from code via the release's &lt;code&gt;eval&lt;/code&gt; command.&lt;/p&gt;</summary><content type="html">&lt;p&gt;In a dev or test environment, we execute the &lt;code&gt;mix ecto.migrate&lt;/code&gt; command to run
database migrations. When running from a release, however, the mix command is
not available. Instead, we need to execute &lt;code&gt;Ecto.Migrator.run/4&lt;/code&gt; from code.&lt;/p&gt;
&lt;p&gt;We do this by defining a helper function which we can then call from the release startup script:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/srv/foo/current/bin/foo&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;eval&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;MyApp.ReleaseTasks.migrate.run&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The main point is that we need to initialize the database connection
information the same way as your application normally does, e.g. by loading a
runtime file like &lt;code&gt;/etc/foo/config.toml&lt;/code&gt; or setting environment variables.
If you use TOML configuration files, you will need to install the package
&lt;a href="https://hex.pm/packages/toml_config_provider"&gt;toml_config_provider&lt;/a&gt; from Hex.&lt;/p&gt;
&lt;p&gt;Following is a working example. Where you see &lt;code&gt;CHANGEME&lt;/code&gt;, use the name of your
project.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;defmodule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Foo.ReleaseTasks.Migrate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@moduledoc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Mix task to run Ecto database migrations&amp;quot;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# CHANGEME: Name of app as used by Application.get_env&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:foo&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# CHANGEME: Name of app repo module&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Foo.Repo&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;\\&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;ext_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;to_string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;_&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;-&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config_dir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/etc&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ext_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Read config.exs if present&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;config.exs&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;app_env&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;File&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exists?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Loading config file &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="nc"&gt;Config.Reader&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;([],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Config.Reader&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="n"&gt;app_env&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Read TOML config if present (requires `toml_config_provider` package)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;config.toml&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;app_env&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;File&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exists?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Loading config file &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="nc"&gt;TomlConfigProvider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="n"&gt;app_env&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_env&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Start requisite apps&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Starting applications..&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:crypto&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ssl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:postgrex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ecto&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ecto_sql&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ensure_all_started&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Started &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;inspect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Start repo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Starting repo&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;_pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:start_link&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="ss"&gt;pool_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;log&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:info&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;log_sql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Run migrations for the repo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Running migrations&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;priv_dir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app_dir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;priv&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;migrations_dir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;priv_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;repo&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;migrations&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;all&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:pool&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;function_exported?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:unboxed_run&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;unboxed_run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nc"&gt;Ecto.Migrator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;migrations_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:up&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;Ecto.Migrator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;migrations_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:up&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Shut down&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;:init&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="deploy"/><category term="elixir"/><category term="migrations"/></entry><entry><title>Deploying Elixir apps with Ansible</title><link href="https://www.cogini.com/blog/deploying-elixir-apps-with-ansible/" rel="alternate"/><published>2019-06-16T00:00:00+08:00</published><updated>2019-06-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-06-16:/blog/deploying-elixir-apps-with-ansible/</id><summary type="html">&lt;p&gt;Deploying Elixir apps with &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;, an easy-to-use standard tool for managing servers.&lt;/p&gt;</summary><content type="html">&lt;p&gt;Elixir runs great on bare metal, as it easily takes advantage of all the cores
on the machine. We can get a machine with 24 CPU cores, 32 GB of RAM, and 10 TB
of traffic for under $100 a month. Try that in the cloud. You can cut your hosting bills by 90%.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://www.cogini.com/images/blog/dedicated-server-price.png" alt="Dedicated server price" width="100%"/&gt;&lt;/p&gt;
&lt;p&gt;When deploying to bare metal, we use &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;, an
easy-to-use standard tool for managing servers. It has reliable and well
documented primitives to handle logging into servers, uploading files and
executing commands: just what we need to deploy an Elixir app. We also use it
when deploying to AWS, setting up AMIs for use in auto-scaling groups.
This guide shows how to set up and run a Phoenix app, talking to a Postgres database.
It is based on this &lt;a href="https://github.com/cogini/mix-deploy-example"&gt;working template&lt;/a&gt;
and the principles in "&lt;a href="https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/"&gt;Best practices for deploying Elixir
apps&lt;/a&gt;".&lt;/p&gt;
&lt;h1&gt;Install Ansible&lt;/h1&gt;
&lt;p&gt;On your local dev machine, install Ansible with the Python pip command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;python&lt;span class="w"&gt; &lt;/span&gt;-m&lt;span class="w"&gt; &lt;/span&gt;pip&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;ansible
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;a href="http://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html"&gt;the Ansible docs&lt;/a&gt;
for other options.&lt;/p&gt;
&lt;h1&gt;Clone the example application&lt;/h1&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/cogini/mix-deploy-example.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h1&gt;Set SSH alias and Ansible host list&lt;/h1&gt;
&lt;p&gt;Ansible uses ssh to talk to the server. On your local dev machine, add an ssh
host alias in the &lt;code&gt;~/.ssh/config&lt;/code&gt; file so you can reference the server using
its name:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Host web-server1
    HostName 123.45.67.89
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can use any name you like, but it needs to match the name in the Ansible
"inventory", e.g. &lt;code&gt;ansible/inventory/hosts&lt;/code&gt;. This file puts hosts
into groups so we can manage them together, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[web_servers]&lt;/span&gt;
&lt;span class="na"&gt;web-server1&lt;/span&gt;
&lt;span class="na"&gt;web-server2&lt;/span&gt;

&lt;span class="k"&gt;[db_servers]&lt;/span&gt;
&lt;span class="na"&gt;db-server1&lt;/span&gt;
&lt;span class="na"&gt;db-server2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;[web_servers]&lt;/code&gt; is a group of web servers. &lt;code&gt;web-server1&lt;/code&gt; is a hostname from the
&lt;code&gt;Host&lt;/code&gt; line in your &lt;code&gt;.ssh/config&lt;/code&gt; file.&lt;/p&gt;
&lt;p&gt;The configuration variables defined in &lt;code&gt;inventory/group_vars/all&lt;/code&gt; apply to all hosts in
your project. They are overridden by vars for more specific groups like
&lt;code&gt;inventory/group_vars/web_servers&lt;/code&gt; or for individual hosts, e.g.
&lt;code&gt;inventory/host_vars/web-server&lt;/code&gt;.&lt;/p&gt;
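&lt;p&gt;For example (the variable name here is hypothetical), a value defined for all hosts is overridden for just the web servers group:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# inventory/group_vars/all/vars.yml
app_port: 4000

# inventory/group_vars/web_servers/vars.yml
app_port: 8080
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;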
&lt;h3&gt;Configure the Elixir app&lt;/h3&gt;
&lt;p&gt;We use our &lt;a href="https://galaxy.ansible.com/cogini/elixir-release"&gt;elixir-release Ansible role&lt;/a&gt;
to set up and deploy the Elixir app.&lt;/p&gt;
&lt;p&gt;Under the &lt;code&gt;ansible&lt;/code&gt; dir, edit &lt;code&gt;inventory/group_vars/all/elixir-release.yml&lt;/code&gt; to
match the Elixir app you are deploying:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# External name of the app, used to name directories and the systemd unit&lt;/span&gt;
&lt;span class="nt"&gt;elixir_release_app_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;mix-deploy-example&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;a href="https://galaxy.ansible.com/cogini/elixir-release"&gt;the docs for the elixir-release Ansible role&lt;/a&gt;
for more options.&lt;/p&gt;
&lt;h3&gt;Configure OS accounts&lt;/h3&gt;
&lt;p&gt;We use our &lt;a href="https://galaxy.ansible.com/cogini/users"&gt;users Ansible role&lt;/a&gt; to manage
the accounts used to deploy and run the app, as well as control access for system
administrators and developers.&lt;/p&gt;
&lt;p&gt;For security, we use separate accounts to deploy the app and to run it.  The
deploy account owns the code and config files, and has rights to restart the
app. We normally use a separate account called &lt;code&gt;deploy&lt;/code&gt;.  The app runs under a
separate account with the minimum permissions it needs at runtime. We normally name this
account after the app, e.g. &lt;code&gt;foo&lt;/code&gt;, or use a generic name like &lt;code&gt;app&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Configure users and associated ssh keys in &lt;code&gt;inventory/group_vars/all/users.yml&lt;/code&gt;.
The following defines a user &lt;code&gt;jake&lt;/code&gt;, getting the ssh keys from their GitHub profile,
and a user &lt;code&gt;ci&lt;/code&gt; for a CI/CD server account, getting the ssh key from a file.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Jake&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Morrison&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;reachfh&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ci&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;CI&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;server&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ci.pub&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;users_deploy_users&lt;/code&gt; defines users that are allowed to log into the &lt;code&gt;deploy&lt;/code&gt;
account on the server via ssh, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_deploy_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jenkins&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;users_global_admin_users&lt;/code&gt; defines admin users, e.g. for your ops team.  The
following creates a separate account called &lt;code&gt;bob&lt;/code&gt; on the server with sudo:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_global_admin_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;bob&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See the docs for the &lt;a href="https://galaxy.ansible.com/cogini/users"&gt;users Ansible role&lt;/a&gt;
for more options.&lt;/p&gt;
&lt;p&gt;We split the deployment into two phases, setup and deploy. In the setup phase,
we do the tasks that require elevated permissions, e.g. creating user accounts,
creating app dirs, installing OS packages, and setting up the firewall.&lt;/p&gt;
&lt;p&gt;In the deploy phase, we push the latest code to the server and restart it.
The deploy &lt;a href="https://www.cogini.com/blog/deploying-elixir-apps-without-sudo/"&gt;doesn't require admin permissions&lt;/a&gt;,
so it can be run by a regular user, e.g. from the build server.&lt;/p&gt;
&lt;h1&gt;Set up the web server&lt;/h1&gt;
&lt;p&gt;Once things are configured, run Ansible to do initial server setup,
creating users and configuring the firewall.&lt;/p&gt;
&lt;p&gt;From your local dev machine, run this playbook:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web_servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/setup-web.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;-u&lt;/code&gt; flag specifies the user for bootstrapping; after that, you would
normally use your own admin user. The user needs to be root or have sudo
permissions. Depending on the hosting provider's provisioning process, that
might be &lt;code&gt;root&lt;/code&gt; or a default user like &lt;code&gt;centos&lt;/code&gt; or &lt;code&gt;ubuntu&lt;/code&gt; with your keypair
installed. See &lt;code&gt;playbooks/manage-users.yml&lt;/code&gt; for other connection options, e.g.
specifying a root password manually.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-v&lt;/code&gt; flag controls verbosity; add more v's to get more debug info.
The &lt;code&gt;-D&lt;/code&gt; flag shows diffs of the changes Ansible makes on the server.
If you add &lt;code&gt;--check&lt;/code&gt; to the command, Ansible shows the changes it would
make without actually running them.&lt;/p&gt;
&lt;p&gt;This playbook uses the &lt;code&gt;iptables&lt;/code&gt; and &lt;code&gt;iptables-http&lt;/code&gt; roles to set up the base
firewall and &lt;a href="https://www.cogini.com/blog/port-forwarding-with-iptables/"&gt;port forwarding&lt;/a&gt;
to redirect ports 80/443 to the non-privileged port the app listens on.&lt;/p&gt;
&lt;h2&gt;Set up the app&lt;/h2&gt;
&lt;p&gt;Next, set up the server to run the app, creating directories and configuring
the systemd unit. The &lt;a href="https://hex.pm/packages/mix_systemd"&gt;mix_systemd&lt;/a&gt;
Elixir library creates a systemd unit file for the app, and the
&lt;a href="https://hex.pm/packages/mix_deploy"&gt;mix_deploy&lt;/a&gt; library generates utility
scripts.&lt;/p&gt;
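&lt;p&gt;These are added as dependencies in the app's &lt;code&gt;mix.exs&lt;/code&gt; (the version constraints here are illustrative; check Hex for current releases):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;defp deps do
  [
    {:mix_systemd, "~&amp;gt; 0.7"},
    {:mix_deploy, "~&amp;gt; 0.7"}
  ]
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;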
&lt;p&gt;When &lt;a href="https://www.cogini.com/blog/deploying-phoenix-to-digital-ocean-with-mix-deploy/"&gt;deploying to a single server&lt;/a&gt;,
you can build on the server and run the scripts to set it up. When deploying to
multiple servers, we use the &lt;a href="https://galaxy.ansible.com/cogini/elixir-release"&gt;elixir-release Ansible role&lt;/a&gt;.
It creates directories and copies the generated files from the local project to the server.&lt;/p&gt;
&lt;p&gt;From your local dev machine, run this playbook:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web_servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/deploy-app.yml&lt;span class="w"&gt; &lt;/span&gt;--skip-tags&lt;span class="w"&gt; &lt;/span&gt;deploy&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This runs all the tasks in the role except those tagged with &lt;code&gt;deploy&lt;/code&gt;, which
are instead run from the CI/CD server.&lt;/p&gt;
&lt;h1&gt;Deploy the app config&lt;/h1&gt;
&lt;p&gt;Instead of baking secrets like db passwords into the release file, we create a
config file and copy it to the app config dir under &lt;code&gt;/etc&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Here we use the &lt;a href="https://www.cogini.com/blog/managing-app-secrets-with-ansible/"&gt;Ansible vault to manage app secrets&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First, configure the db server settings in &lt;code&gt;inventory/group_vars/all/db.yml&lt;/code&gt;,
&lt;code&gt;db_password&lt;/code&gt; in &lt;code&gt;inventory/group_vars/all/secrets.yml&lt;/code&gt;, and &lt;code&gt;secret_key_base&lt;/code&gt;
in &lt;code&gt;inventory/group_vars/web_servers/secrets.yml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;From your local dev machine, run this playbook to generate a TOML config
file with the secrets and push it to the web servers:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook -u $USER -v -l web_servers playbooks/config-web.yml -D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;code&gt;templates/app.config.toml.j2&lt;/code&gt;.&lt;/p&gt;
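&lt;p&gt;As an illustration (the exact keys depend on the template; these are hypothetical), the generated file might look like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# /etc/mix-deploy-example/config.toml
[database]
password = "..."

[endpoint]
secret_key_base = "..."
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;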
&lt;h1&gt;Deploy the app to the web servers&lt;/h1&gt;
&lt;p&gt;Finally, we are ready to deploy the app. We build the release on a build
server, then push it to the prod servers and restart the app.&lt;/p&gt;
&lt;p&gt;On a CI/CD server, run the following playbook:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;deploy&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web_servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/deploy-app.yml&lt;span class="w"&gt; &lt;/span&gt;--tags&lt;span class="w"&gt; &lt;/span&gt;deploy&lt;span class="w"&gt; &lt;/span&gt;--extra-vars&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ansible_become&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;-u deploy&lt;/code&gt; specifies that the CI server should connect to the target server as
the &lt;code&gt;deploy&lt;/code&gt; user using the ssh key we set up before.&lt;/p&gt;
&lt;h1&gt;Other tasks&lt;/h1&gt;
&lt;p&gt;Ansible is useful for all sorts of admin tasks.&lt;/p&gt;
&lt;p&gt;This playbook sets up the build server, installing
&lt;a href="https://www.cogini.com/blog/using-asdf-with-elixir-and-phoenix/"&gt;ASDF version manager&lt;/a&gt; and checking out the app from git:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook -u root -v -l build_servers playbooks/setup-build.yml -D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;code&gt;inventory/group_vars/build_servers/vars.yml&lt;/code&gt;, particularly &lt;code&gt;app_repo&lt;/code&gt; for
the URL of the git repo.&lt;/p&gt;
&lt;p&gt;You can install Ansible on the build machine with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web_servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/setup-ansible.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The following playbook sets up a Postgres database:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;db_servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/setup-db.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Configuration is in &lt;code&gt;inventory/group_vars/db_servers/postgresql.yml&lt;/code&gt;.&lt;/p&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/><category term="deployment"/></entry><entry><title>Managing user accounts with Ansible</title><link href="https://www.cogini.com/blog/managing-user-accounts-with-ansible/" rel="alternate"/><published>2019-05-24T00:00:00+08:00</published><updated>2019-05-24T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-05-24:/blog/managing-user-accounts-with-ansible/</id><summary type="html">&lt;p&gt;As part of developing and deploying web applications, we need to be able to manage OS user accounts and control access for developers and systems admins.  To do this, we wrote an &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt; role to manage &lt;a href="https://galaxy.ansible.com/cogini/users"&gt;users&lt;/a&gt;.&lt;/p&gt;</summary><content type="html">&lt;p&gt;As part of developing and deploying web applications, we need to be able to manage OS user
accounts and control access for developers and systems admins.&lt;/p&gt;
&lt;p&gt;To do this, we wrote an &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt; role to manage
&lt;a href="https://galaxy.ansible.com/cogini/users"&gt;users&lt;/a&gt;. It is basically an
opinionated wrapper on the &lt;a href="http://docs.ansible.com/ansible/latest/user_module.html"&gt;Ansible user module&lt;/a&gt;.
This post describes how it works.&lt;/p&gt;
&lt;p&gt;At a high level, we create an OS account for the app to run under and an account
to deploy the app. We may also create accounts for specific users. We then add
users' ssh keys to control access to these accounts. The keys can be
stored locally or pulled from GitHub.&lt;/p&gt;
&lt;h2&gt;User types&lt;/h2&gt;
&lt;p&gt;We separate users into different types:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Global system admins / ops team&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When we provision a server, we automatically create accounts for our system
admin team, independent of the project.&lt;/p&gt;
&lt;p&gt;These users have their own accounts on the server with sudo permissions. We add
them to the &lt;code&gt;wheel&lt;/code&gt; or &lt;code&gt;admin&lt;/code&gt; group, then allow them to run sudo without a
password.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Project admins / power users&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These users have the same rights as global admins, but are set up on a
per-project or per-server basis, controlled with inventory host/group vars.
Normally the tech lead for the project would be an admin.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deploy account&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This user account is used to deploy the application to the server.  It owns the
application software files and has write permissions to the deploy and config
directories.&lt;/p&gt;
&lt;p&gt;The app and deploy accounts do not have sudo permissions. We may, however, configure
sudo to allow them to run specific commands to e.g. restart the app by
running &lt;code&gt;systemctl restart foo&lt;/code&gt;. That is handled by the role that installs and
configures the app, not this role.&lt;/p&gt;
&lt;p&gt;For example, make a file like &lt;code&gt;/etc/sudoers.d/deploy-foo&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;deploy ALL=(ALL) NOPASSWD: /bin/systemctl start foo, /bin/systemctl stop foo, /bin/systemctl restart foo, /bin/systemctl status foo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Another option is to configure systemd to look for a flag file and restart the app
if it changes. See &lt;a href="https://github.com/cogini/mix_deploy#restarting"&gt;mix_deploy&lt;/a&gt; for an example.&lt;/p&gt;
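&lt;p&gt;As a sketch of the flag-file approach (the unit names and file paths here are hypothetical, not taken from mix_deploy), a systemd path unit can watch the flag and trigger a restart:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# /etc/systemd/system/foo-restart.path
# Watches the flag file; triggers foo-restart.service when it changes
[Path]
PathChanged=/srv/foo/flags/restart.flag

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/foo-restart.service
# Runs as root under systemd, so no sudo is needed
[Service]
Type=oneshot
ExecStart=/bin/systemctl restart foo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;With this setup, the deploy account only needs write access to the flag file, not sudo.&lt;/p&gt;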
&lt;ul&gt;
&lt;li&gt;App account&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The application runs under this user account.&lt;/p&gt;
&lt;p&gt;For better security, we limit what this account can do. It has write access to
the directories it needs at runtime, e.g. logs, and has read-only access to its
code and config files.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Developers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Developers may need to access the &lt;code&gt;deploy&lt;/code&gt; account to deploy or the app user
account to look at the logs and debug it. We add the ssh keys for developers to
the accounts, allowing them to log in via ssh.&lt;/p&gt;
&lt;p&gt;For systems with sensitive data like health care or financial services, we tightly
control access to production servers, but give more access to dev servers.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Project users&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These users have their own OS user accounts like admins, but don't have sudo.
An example might be an account for a customer who needs to be able to log in and run
queries against the db. You can give them permissions to e.g. access the log
files for the app by adding them to the app group and setting file permissions.&lt;/p&gt;
&lt;h1&gt;Configuration&lt;/h1&gt;
&lt;p&gt;By default, this role does nothing.&lt;/p&gt;
&lt;p&gt;For it to do something, you need to define variables in group vars like
&lt;code&gt;inventory/group_vars/app-servers&lt;/code&gt;, host vars like
&lt;code&gt;inventory/host_vars/web-server&lt;/code&gt; or a &lt;code&gt;vars&lt;/code&gt; section in a playbook.&lt;/p&gt;
&lt;p&gt;You can have different settings at the host or group level to, e.g., give
developers login access in the dev environment but not in prod.&lt;/p&gt;
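&lt;p&gt;For example, you might allow a developer to access the app account in dev but not in prod (the file paths and user id here are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# inventory/group_vars/dev_servers/users.yml
users_app_users:
  - jake

# inventory/group_vars/prod_servers/users.yml
users_app_users: []
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;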
&lt;h2&gt;App accounts&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;users_deploy_user&lt;/code&gt; specifies the account that deploys the app.
Optional; if not specified, the deploy user will not be created.
&lt;code&gt;users_deploy_group&lt;/code&gt; specifies the group, defaulting to &lt;code&gt;users_deploy_user&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;users_deploy_user&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;
&lt;span class="n"&gt;users_deploy_group&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;users_app_user&lt;/code&gt; specifies the account that runs the app.
Optional; if not specified, the app user will not be created.
&lt;code&gt;users_app_group&lt;/code&gt; specifies the group, defaulting to &lt;code&gt;users_app_user&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;users_app_user&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;foo&lt;/span&gt;
&lt;span class="n"&gt;users_app_group&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;foo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;User accounts&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;users_users&lt;/code&gt; defines OS account names and ssh keys for users. It is simply a
list of user definitions; the accounts are not created until they are referenced
in one of the lists below.&lt;/p&gt;
&lt;p&gt;Each entry is a dict with four fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;user&lt;/code&gt;: Name of the OS account&lt;/li&gt;
&lt;li&gt;&lt;code&gt;name&lt;/code&gt;: User's name. Optional.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;key&lt;/code&gt;: ssh public key file. Put it in, e.g., your playbook &lt;code&gt;files&lt;/code&gt; directory. Optional.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;github&lt;/code&gt;: The user's GitHub id. The role gets the user keys from
  &lt;code&gt;https://github.com/{{ github }}.keys&lt;/code&gt;. Optional.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Jake&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Morrison&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;reachfh&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ci&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;CI&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;server&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ci.pub&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Lists of users&lt;/h2&gt;
&lt;p&gt;After you have defined users, you add them to the lists for specific servers,
referencing the id given in the &lt;code&gt;user&lt;/code&gt; key. By default, these lists are
empty, so if you don't add users to them, the accounts will not be created.&lt;/p&gt;
&lt;p&gt;Global admin users with a separate OS account and sudo permissions.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_global_admin_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Project level admin users with a separate OS account and sudo permissions.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_admin_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;fred&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Project users with a separate OS account but no sudo permission.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_regular_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;bob&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Users (ssh keys) who can access the deploy account.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_deploy_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ci&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Users (ssh keys) who can access the app account.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_app_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;fred&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Group configuration&lt;/h2&gt;
&lt;p&gt;You can specify additional OS groups for each type of user. By default these
lists are empty, but you can use them to fine-tune access to the app.&lt;/p&gt;
&lt;p&gt;We normally configure ssh so that a user account must be a member of the
&lt;code&gt;sshusers&lt;/code&gt; group, or sshd will not allow it to log in.&lt;/p&gt;
&lt;p&gt;Add this to &lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;AllowGroups sshusers
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then add &lt;code&gt;sshusers&lt;/code&gt; to the &lt;code&gt;users_admin_groups&lt;/code&gt;, e.g.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_admin_groups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;sshusers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Admin user groups&lt;/h3&gt;
&lt;p&gt;Additional groups that admin users should have.&lt;/p&gt;
&lt;p&gt;Admin users are always added to the &lt;code&gt;wheel&lt;/code&gt; or &lt;code&gt;admin&lt;/code&gt; group, depending on the
platform, independent of the OS groups specified here. If any admin users are
defined, this role sets up sudo with an &lt;code&gt;/etc/sudoers.d/00-admin&lt;/code&gt; file so
that admin users can run sudo without a password.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_admin_groups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;sshusers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Regular user groups&lt;/h3&gt;
&lt;p&gt;Additional groups that regular users should have.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_regular_groups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;sshusers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Deploy user groups&lt;/h3&gt;
&lt;p&gt;Additional groups that the deploy account should have.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_deploy_groups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;sshusers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;App user groups&lt;/h3&gt;
&lt;p&gt;Additional groups that the app account should have.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_app_groups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;sshusers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Deleting users&lt;/h2&gt;
&lt;p&gt;This role puts "ansible-" in the account comment field when it creates users.
This allows it to track when users are added to or removed from the lists and
delete the accounts.&lt;/p&gt;
&lt;p&gt;You can also specify accounts in the &lt;code&gt;users_delete_users&lt;/code&gt; list and they will be
deleted. This is useful for cleaning up legacy accounts.&lt;/p&gt;
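&lt;p&gt;For example (the account names here are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;users_delete_users:
  - olduser
  - testaccount
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;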
&lt;p&gt;You can control whether to delete the user's home directory when deleting the
account with the &lt;code&gt;users_delete_remove&lt;/code&gt; and &lt;code&gt;users_delete_force&lt;/code&gt; variables.
See &lt;a href="http://docs.ansible.com/ansible/user_module.html"&gt;the Ansible docs&lt;/a&gt; for details.
For safety, these variables are &lt;code&gt;no&lt;/code&gt; by default, but if you are managing the
system users with this role, you probably want to set them to &lt;code&gt;yes&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;users_delete_remove&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;yes&lt;/span&gt;
&lt;span class="n"&gt;users_delete_force&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The role can optionally remove authorized keys from system users like &lt;code&gt;root&lt;/code&gt; or
&lt;code&gt;centos&lt;/code&gt;. This improves security by removing leftover root keys once you have
set up named admin users.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;users_remove_system_authorized_keys&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Setup&lt;/h2&gt;
&lt;p&gt;This role is normally the first thing run on a new instance.
It creates admin users and sets up their keys so that they can run the other
roles which configure the server.&lt;/p&gt;
&lt;p&gt;A project specific role is responsible for preparing the server for the app,
e.g. creating directories and installing dependencies. We normally deploy the
app from a build or CI server, without sudo, using the &lt;code&gt;deploy&lt;/code&gt; user account.&lt;/p&gt;
&lt;p&gt;Here is a typical playbook:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Manage users&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;hosts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;*&amp;#39;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;vars&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;users_app_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;foo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;users_app_group&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;foo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;users_deploy_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;deploy&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;users_deploy_group&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;deploy&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;users_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Jake&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Morrison&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;reachfh&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;users_app_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;users_deploy_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt; role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;cogini.users&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; become&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Add the host to the &lt;code&gt;inventory/hosts&lt;/code&gt; file.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[web-servers]&lt;/span&gt;
&lt;span class="na"&gt;web-server-01&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Add the host to &lt;code&gt;.ssh/config&lt;/code&gt; or a project specific &lt;code&gt;ssh.config&lt;/code&gt; file.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Host web-server-01
    HostName 123.45.67.89
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;On a physical server, where we start with a root account and no ssh keys, we
need to bootstrap the server the first time, specifying the password with &lt;code&gt;-k&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;ansible&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;playbook&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;playbooks&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;manage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yml&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;extra&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;vars&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ansible_host=123.45.67.89&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;On macOS, the &lt;code&gt;-k&lt;/code&gt; option requires the &lt;code&gt;sshpass&lt;/code&gt; utility, which is not installed
by default, so Ansible falls back to &lt;a href="http://www.paramiko.org/"&gt;Paramiko&lt;/a&gt;, which
doesn't understand &lt;code&gt;.ssh/config&lt;/code&gt;; that's why we specify &lt;code&gt;ansible_host&lt;/code&gt; manually.&lt;/p&gt;
&lt;p&gt;On subsequent runs, after the admin users are set up, use:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook -u $USER -v -l web-server-01 playbooks/manage-users.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Deleting legacy users&lt;/h2&gt;
&lt;p&gt;Define legacy user accounts to delete in the &lt;code&gt;users_delete_users&lt;/code&gt; list, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;ansible&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;playbook&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;playbooks&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;manage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yml&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c1"&gt;--extra-vars &amp;quot;users_delete_users=[fred] users_delete_remove=yes users_delete_force=yes&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="ansible"/></entry><entry><title>Multiple databases with Digital Ocean Managed Databases Service</title><link href="https://www.cogini.com/blog/multiple-databases-with-digital-ocean-managed-databases-service/" rel="alternate"/><published>2019-05-24T00:00:00+08:00</published><updated>2019-05-24T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-05-24:/blog/multiple-databases-with-digital-ocean-managed-databases-service/</id><summary type="html">&lt;p&gt;Setting up multiple databases and users with restricted permissions on &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean's&lt;/a&gt; Managed Databases service.&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean's&lt;/a&gt; new &lt;a href="https://www.digitalocean.com/products/managed-databases/"&gt;Managed
Databases&lt;/a&gt; service
takes care of managing your database for you.&lt;/p&gt;
&lt;p&gt;If you are creating one db cluster per app, you can use the &lt;code&gt;defaultdb&lt;/code&gt;
database and &lt;code&gt;doadmin&lt;/code&gt; user. It's cheaper, however, to run multiple
applications on the same db cluster or dev/staging/prod versions of the same
app. We can create databases and users via the UI, but by default all users
have full rights to all databases.&lt;/p&gt;
&lt;p&gt;A better solution is to create users with restricted permissions.
To do that we need to set permissions via SQL.&lt;/p&gt;
&lt;p&gt;First, &lt;a href="https://www.compose.com/articles/postgresql-tips-installing-the-postgresql-client/"&gt;install the Postgres client library&lt;/a&gt;
on your Droplet (assuming Ubuntu 18.04):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;apt-get&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;postgresql-client&lt;span class="w"&gt; &lt;/span&gt;postgresql-client-common
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Log into the database shell as the &lt;code&gt;doadmin&lt;/code&gt; account:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;psql&lt;span class="w"&gt; &lt;/span&gt;-U&lt;span class="w"&gt; &lt;/span&gt;doadmin&lt;span class="w"&gt; &lt;/span&gt;-h&lt;span class="w"&gt; &lt;/span&gt;&amp;lt;host&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;25060&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="w"&gt; &lt;/span&gt;defaultdb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Create the production database and make it private by default:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_prod&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;revoke&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_prod&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;On your dev machine, generate a strong password for the db user using e.g.
&lt;a href="https://pwgen.sourceforge.io/"&gt;pwgen&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;pwgen&lt;span class="w"&gt; &lt;/span&gt;-s&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;16&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Create the prod db user, putting in the user password:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_prod&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;encrypted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;CHANGEME&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Give the app user rights to manage the prod db:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;grant&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;privileges&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_prod&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_prod&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Exit the shell with &lt;code&gt;\q&lt;/code&gt;. Confirm that the db user can connect:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;psql&lt;span class="w"&gt; &lt;/span&gt;-U&lt;span class="w"&gt; &lt;/span&gt;app_prod&lt;span class="w"&gt; &lt;/span&gt;-h&lt;span class="w"&gt; &lt;/span&gt;&amp;lt;host&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;25060&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="w"&gt; &lt;/span&gt;app_prod
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Follow the same procedure to create a test database and user.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_test&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;revoke&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;encrypted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;CHANGEME2&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;grant&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app_test&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;These db users have full rights on their respective databases, e.g. they can
create tables. This matches what is expected by web frameworks that run
database migrations on startup. They can't, however, create databases.&lt;/p&gt;
&lt;p&gt;Applications with higher security requirements can separate the user account
which manages the database schema from the user which accesses the database at
runtime. If an attacker compromises your application, they are then limited in
what they can do. Another option is to create read-only views which hide
sensitive information.&lt;/p&gt;
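&lt;p&gt;As a rough sketch (the &lt;code&gt;app_readonly&lt;/code&gt; name here is hypothetical), a separate read-only account for runtime access might look like this, with &lt;code&gt;app_prod&lt;/code&gt; reserved for running migrations:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;create user app_readonly with encrypted password 'CHANGEME3';
grant connect on database app_prod to app_readonly;
grant usage on schema public to app_readonly;
grant select on all tables in schema public to app_readonly;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;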
&lt;p&gt;See "&lt;a href="https://dba.stackexchange.com/questions/117109/how-to-manage-default-privileges-for-users-on-a-database-vs-schema"&gt;How to manage default privileges for users on a database vs schema&lt;/a&gt;" for more details.&lt;/p&gt;</content><category term="DevOps"/><category term="digial ocean"/><category term="postgresql"/></entry><entry><title>Running Ecto migrations in production releases with Distillery custom commands</title><link href="https://www.cogini.com/blog/running-ecto-migrations-in-production-releases-with-distillery-custom-commands/" rel="alternate"/><published>2019-05-24T00:00:00+08:00</published><updated>2019-05-24T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-05-24:/blog/running-ecto-migrations-in-production-releases-with-distillery-custom-commands/</id><summary type="html">&lt;p&gt;In a dev or test environment, we execute the &lt;code&gt;mix ecto.migrate&lt;/code&gt; command to run database migrations. When running from a release, the mix command is not available, so we execute &lt;code&gt;Ecto.Migrator.run/4&lt;/code&gt; from code via a Distillery &lt;a href="https://hexdocs.pm/distillery/extensibility/custom_commands.html"&gt;custom command&lt;/a&gt; command.&lt;/p&gt;</summary><content type="html">&lt;p&gt;In a dev or test environment, we execute the &lt;code&gt;mix ecto.migrate&lt;/code&gt; command to run
database migrations. When running from a release, however, the mix
command is not available. Instead, we need to execute &lt;code&gt;Ecto.Migrator.run/4&lt;/code&gt; from code.
We do this by adding a Distillery &lt;a href="https://hexdocs.pm/distillery/extensibility/custom_commands.html"&gt;custom command&lt;/a&gt; called
&lt;code&gt;migrate&lt;/code&gt; which we call from the release script, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/srv/foo/current/bin/foo&lt;span class="w"&gt; &lt;/span&gt;migrate
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The fundamentals of defining and running a migration are covered in &lt;a href="https://medium.com/flatiron-labs/run-ecto-migrations-in-production-with-distillery-boot-hooks-7f576d2b93ed"&gt;this
article&lt;/a&gt;.
In a more complex app, however, we need a bit more in our script to initialize
the environment.&lt;/p&gt;
&lt;p&gt;First, in &lt;code&gt;rel/config.exs&lt;/code&gt;, define the &lt;code&gt;migrate&lt;/code&gt; command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;migrate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;rel/commands/migrate.sh&amp;quot;&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Next, create the script &lt;code&gt;rel/commands/migrate.sh&lt;/code&gt;, changing &lt;code&gt;Foo&lt;/code&gt; to match your project module name:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;release_ctl&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;eval&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;--mfa&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Foo.Tasks.Migrate.run/1&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;--argv&lt;span class="w"&gt; &lt;/span&gt;--&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Finally, add the Elixir code which runs the database migrations.&lt;/p&gt;
&lt;p&gt;The main point is that we need to initialize the database connection
information the same way as your application normally does, e.g. by loading
a runtime file like &lt;code&gt;/etc/foo/config.toml&lt;/code&gt; or setting environment variables.&lt;/p&gt;
&lt;p&gt;Following is a working example. Where you see &lt;code&gt;CHANGEME&lt;/code&gt;, use the name of your
project.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;defmodule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Foo.Tasks.Migrate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@moduledoc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Mix task to run Ecto database migrations&amp;quot;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# CHANGEME: Name of app as used by Application.get_env&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:foo&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# CHANGEME: Name of app repo module&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Foo.Repo&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;ext_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;to_string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;_&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;-&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config_dir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/etc&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ext_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;config.exs&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;File&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exists?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Loading config file &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;Mix.Releases.Config.Providers.Elixir&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;config_exs&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;config.toml&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;File&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exists?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Loading config file &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;Toml.Provider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="ss"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;config_toml&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;repo_config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;repo_config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Keyword&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo_config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:adapter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Ecto.Adapters.Postgres&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;repo_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Start requisite apps&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Starting applications..&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:crypto&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ssl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:postgrex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ecto&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ecto_sql&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ensure_all_started&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Started &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;inspect&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Start repo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Starting repo&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;_pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:start_link&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="ss"&gt;pool_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;log&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:info&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;log_sql&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Run migrations for the repo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;puts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Running migrations&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;priv_dir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app_dir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;priv&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;migrations_dir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;priv_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;repo&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;migrations&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;all&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:pool&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;function_exported?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:unboxed_run&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;unboxed_run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Ecto.Migrator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;migrations_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:up&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;Ecto.Migrator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@repo_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;migrations_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:up&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Shut down&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;:init&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="deploy"/><category term="elixir"/><category term="distillery"/><category term="migrations"/></entry><entry><title>Tuning TCP ports for your Elixir app</title><link href="https://www.cogini.com/blog/tuning-tcp-ports-for-your-elixir-app/" rel="alternate"/><published>2019-05-24T00:00:00+08:00</published><updated>2019-05-24T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-05-24:/blog/tuning-tcp-ports-for-your-elixir-app/</id><summary type="html">&lt;p&gt;Elixir is great at handling lots of concurrent connections. When you actually try to do this, however, you will bump up against the default OS configuration which limits the number of open filehandles/sockets. You may also run out of TCP ephemeral ports.&lt;/p&gt;</summary><content type="html">&lt;p&gt;Elixir is great at handling lots of concurrent connections. When you actually
try to do this, however, you will bump up against the default OS configuration
which limits the number of open filehandles/sockets. You may also run out of
TCP ephemeral ports.&lt;/p&gt;
&lt;p&gt;The result is poor application performance, e.g. timeouts. If you are &lt;a href="http://www.cogini.com/blog/serving-your-phoenix-app-with-nginx/"&gt;running
behind Nginx&lt;/a&gt;,
you may see it as 503 errors, with your application taking five seconds to
respond. When you look at the logs, however, the application response time is
fine.&lt;/p&gt;
&lt;p&gt;What is happening is that the client talks to Nginx, then Nginx talks to your
app, but there are not enough filehandles available, so Nginx queues the
request. The default limit is often 1024 open files, which is pitifully small.
You will need to raise it at each step in the chain, e.g. the systemd unit
file, the Nginx config, and the Erlang VM.&lt;/p&gt;
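&lt;p&gt;For example (paths and values here are illustrative, not prescriptive), the limit can be raised in the systemd unit file for your app, in the main Nginx config, and in the Erlang VM args:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# /etc/systemd/system/foo.service
[Service]
LimitNOFILE=65536

# /etc/nginx/nginx.conf
worker_rlimit_nofile 65536;

# rel/vm.args (Erlang VM: max simultaneous ports/sockets)
+Q 65536
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;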
&lt;p&gt;First, make sure that open file limits are increased at each step in the chain.&lt;/p&gt;
&lt;h2&gt;OS account limits&lt;/h2&gt;
&lt;p&gt;The OS account running the app has default limits:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;$&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;ulimit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-a
core&lt;span class="w"&gt; &lt;/span&gt;file&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;blocks,&lt;span class="w"&gt; &lt;/span&gt;-c&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
data&lt;span class="w"&gt; &lt;/span&gt;seg&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;kbytes,&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;unlimited
scheduling&lt;span class="w"&gt; &lt;/span&gt;priority&lt;span class="w"&gt;             &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;-e&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
file&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;               &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;blocks,&lt;span class="w"&gt; &lt;/span&gt;-f&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;unlimited
pending&lt;span class="w"&gt; &lt;/span&gt;signals&lt;span class="w"&gt;                 &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;-i&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;3841&lt;/span&gt;
max&lt;span class="w"&gt; &lt;/span&gt;locked&lt;span class="w"&gt; &lt;/span&gt;memory&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;kbytes,&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;16384&lt;/span&gt;
max&lt;span class="w"&gt; &lt;/span&gt;memory&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;kbytes,&lt;span class="w"&gt; &lt;/span&gt;-m&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;unlimited
open&lt;span class="w"&gt; &lt;/span&gt;files&lt;span class="w"&gt;                      &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;-n&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1024&lt;/span&gt;
pipe&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="m"&gt;512&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes,&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;8&lt;/span&gt;
POSIX&lt;span class="w"&gt; &lt;/span&gt;message&lt;span class="w"&gt; &lt;/span&gt;queues&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;bytes,&lt;span class="w"&gt; &lt;/span&gt;-q&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;819200&lt;/span&gt;
real-time&lt;span class="w"&gt; &lt;/span&gt;priority&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;-r&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
stack&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;kbytes,&lt;span class="w"&gt; &lt;/span&gt;-s&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;8192&lt;/span&gt;
cpu&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="w"&gt;               &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;seconds,&lt;span class="w"&gt; &lt;/span&gt;-t&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;unlimited
max&lt;span class="w"&gt; &lt;/span&gt;user&lt;span class="w"&gt; &lt;/span&gt;processes&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;-u&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;3841&lt;/span&gt;
virtual&lt;span class="w"&gt; &lt;/span&gt;memory&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;kbytes,&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;unlimited
file&lt;span class="w"&gt; &lt;/span&gt;locks&lt;span class="w"&gt;                      &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;-x&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;unlimited
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here we can see the default open file limit of 1024.&lt;/p&gt;
&lt;p&gt;You can see the same for a running process by looking up limits for the process id,
&lt;code&gt;cat /proc/&amp;lt;pid&amp;gt;/limits&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# cat /proc/800/limits&lt;/span&gt;
Limit&lt;span class="w"&gt;                     &lt;/span&gt;Soft&lt;span class="w"&gt; &lt;/span&gt;Limit&lt;span class="w"&gt;           &lt;/span&gt;Hard&lt;span class="w"&gt; &lt;/span&gt;Limit&lt;span class="w"&gt;           &lt;/span&gt;Units
Max&lt;span class="w"&gt; &lt;/span&gt;cpu&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="w"&gt;              &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;seconds
Max&lt;span class="w"&gt; &lt;/span&gt;file&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;             &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;data&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;             &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;stack&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="m"&gt;8388608&lt;/span&gt;&lt;span class="w"&gt;              &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;core&lt;span class="w"&gt; &lt;/span&gt;file&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt;                    &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;resident&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;set&lt;/span&gt;&lt;span class="w"&gt;          &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;processes&lt;span class="w"&gt;             &lt;/span&gt;&lt;span class="m"&gt;3841&lt;/span&gt;&lt;span class="w"&gt;                 &lt;/span&gt;&lt;span class="m"&gt;3841&lt;/span&gt;&lt;span class="w"&gt;                 &lt;/span&gt;processes
Max&lt;span class="w"&gt; &lt;/span&gt;open&lt;span class="w"&gt; &lt;/span&gt;files&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="m"&gt;65535&lt;/span&gt;&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="m"&gt;65535&lt;/span&gt;&lt;span class="w"&gt;                &lt;/span&gt;files
Max&lt;span class="w"&gt; &lt;/span&gt;locked&lt;span class="w"&gt; &lt;/span&gt;memory&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="m"&gt;16777216&lt;/span&gt;&lt;span class="w"&gt;             &lt;/span&gt;&lt;span class="m"&gt;16777216&lt;/span&gt;&lt;span class="w"&gt;             &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;address&lt;span class="w"&gt; &lt;/span&gt;space&lt;span class="w"&gt;         &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;file&lt;span class="w"&gt; &lt;/span&gt;locks&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;locks
Max&lt;span class="w"&gt; &lt;/span&gt;pending&lt;span class="w"&gt; &lt;/span&gt;signals&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="m"&gt;3841&lt;/span&gt;&lt;span class="w"&gt;                 &lt;/span&gt;&lt;span class="m"&gt;3841&lt;/span&gt;&lt;span class="w"&gt;                 &lt;/span&gt;signals
Max&lt;span class="w"&gt; &lt;/span&gt;msgqueue&lt;span class="w"&gt; &lt;/span&gt;size&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="m"&gt;819200&lt;/span&gt;&lt;span class="w"&gt;               &lt;/span&gt;&lt;span class="m"&gt;819200&lt;/span&gt;&lt;span class="w"&gt;               &lt;/span&gt;bytes
Max&lt;span class="w"&gt; &lt;/span&gt;nice&lt;span class="w"&gt; &lt;/span&gt;priority&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt;                    &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
Max&lt;span class="w"&gt; &lt;/span&gt;realtime&lt;span class="w"&gt; &lt;/span&gt;priority&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt;                    &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
Max&lt;span class="w"&gt; &lt;/span&gt;realtime&lt;span class="w"&gt; &lt;/span&gt;timeout&lt;span class="w"&gt;      &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;unlimited&lt;span class="w"&gt;            &lt;/span&gt;us
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Create a &lt;code&gt;/etc/security/limits.d/foo-limits.conf&lt;/code&gt; file for the account running the app (files in &lt;code&gt;limits.d&lt;/code&gt; need a &lt;code&gt;.conf&lt;/code&gt; extension to be read):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;foo soft    nofile          1000000
foo hard    nofile          1000000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Systemd&lt;/h2&gt;
&lt;p&gt;Systemd doesn't use the limits file, though, so you need to set the limit with variables
like &lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.exec.html#LimitCPU="&gt;LimitNOFILE&lt;/a&gt;
in the unit file for your app:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;LimitNOFILE=65536
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Erlang VM options&lt;/h2&gt;
&lt;p&gt;In &lt;code&gt;rel/vm.args&lt;/code&gt; for your release, increase the number of ports on the command
line. In newer Erlang versions, the setting is &lt;code&gt;+Q 65536&lt;/code&gt;. In older ones, use
&lt;code&gt;-env ERL_MAX_PORTS 65536&lt;/code&gt;.&lt;/p&gt;
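&lt;p&gt;For example, a minimal &lt;code&gt;rel/vm.args&lt;/code&gt; fragment for a newer Erlang might look like this (a sketch; keep whatever other flags your release already sets):&lt;/p&gt;

```
# Increase the maximum number of ports the VM can open
+Q 65536
```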
&lt;h2&gt;Nginx open files&lt;/h2&gt;
&lt;p&gt;If you are running Nginx in front of your app, make sure it has enough open files.
See "&lt;a href="/blog/serving-your-phoenix-app-with-nginx/"&gt;Serving your Phoenix app with Nginx&lt;/a&gt;".&lt;/p&gt;
&lt;h2&gt;Ephemeral TCP ports&lt;/h2&gt;
&lt;p&gt;After that, you may run into a lack of ephemeral TCP ports. This hits you
especially hard when running behind Nginx as a proxy, but can also hit you on the
outbound side when you are talking to a small number of back-end servers.&lt;/p&gt;
&lt;p&gt;In TCP/IP, a connection is defined by the combination of source IP + source
port + destination IP + destination port. In this situation, all but the source
port is fixed: 127.0.0.1 + random + 127.0.0.1 + 4000. There are only 64K ports.
The TCP/IP stack won't reuse a port for 2 x maximum segment lifetime, which by
default is 2 minutes.&lt;/p&gt;
&lt;p&gt;Doing the math:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;1024 ports / 120 sec = 8.53 requests per second with the default file handle limit&lt;/li&gt;
&lt;li&gt;60,000 ports / 120 sec = 500 requests per second&lt;/li&gt;
&lt;/ul&gt;
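&lt;p&gt;As a sanity check, the arithmetic above can be reproduced in the shell (the 120-second window is 2 x the default maximum segment lifetime):&lt;/p&gt;

```shell
# A port stuck in TIME_WAIT is unavailable for 2 x MSL, 120 seconds by default
tw=120

# Requests per second with the default limit of 1024 file handles
awk -v t="$tw" 'BEGIN { printf "%.2f\n", 1024 / t }'   # prints 8.53

# Requests per second with ~60,000 usable ephemeral ports
echo $(( 60000 / tw ))                                 # prints 500
```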
&lt;p&gt;If you are hitting this limit when talking to back-end servers, it's useful to
give your server multiple IP addresses. Tell your HTTP client library to
use an IP from the pool as the source address when making requests. Then the
tuple becomes "source IP from pool" + random port + target IP + 80.&lt;/p&gt;
&lt;p&gt;You may be able to reuse outbound connections with HTTP keep-alive, if the
back ends support it. At a certain point, the back end servers may be the
limit. They may benefit from having more IPs as well.&lt;/p&gt;
&lt;h2&gt;DNS&lt;/h2&gt;
&lt;p&gt;DNS lookups can become an issue on outbound connections. We have had hosting
providers block us because they thought we were doing a DOS attack on their
DNS. Google DNS rate limits to 100 requests per second by default.
&lt;a href="https://www.cogini.com/blog/running-a-local-caching-dns-for-your-app/"&gt;Run a local caching DNS for your app&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Kernel tuning&lt;/h2&gt;
&lt;p&gt;You can tune the OS kernel settings to reduce the maximum segment lifetime.&lt;/p&gt;
&lt;p&gt;In &lt;code&gt;/etc/sysctl.conf&lt;/code&gt; (or a file in &lt;code&gt;/etc/sysctl.d/&lt;/code&gt;):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="gh"&gt;#&lt;/span&gt; Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15
&lt;span class="gh"&gt;#&lt;/span&gt; Recycle and Reuse TIME_WAIT sockets faster
net.ipv4.tcp_tw_reuse = 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Load the new settings:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;sysctl&lt;span class="w"&gt; &lt;/span&gt;-p
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;There are other kernel TCP settings you can tune as well, e.g.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 20000000
sysctl -w net.ipv4.tcp_mem=&amp;#39;10000000 10000000 10000000&amp;#39;
sysctl -w net.ipv4.tcp_rmem=&amp;#39;1024 4096 16384&amp;#39;
sysctl -w net.ipv4.tcp_wmem=&amp;#39;1024 4096 16384&amp;#39;
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;a href="https://phoenixframework.org/blog/the-road-to-2-million-websocket-connections"&gt;The Road to 2 Million Websocket Connections&lt;/a&gt;
and &lt;a href="https://www.rabbitmq.com/networking.html#dealing-with-high-connection-churn"&gt;the RabbitMQ networking guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;See this &lt;a href="https://www.cogini.com/blog/presentation-on-elixir-performance/"&gt;presentation on tuning Elixir performance&lt;/a&gt;.&lt;/p&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/></entry><entry><title>Configure ssh to connect to a server</title><link href="https://www.cogini.com/blog/configure-ssh-to-connect-to-a-server/" rel="alternate"/><published>2019-05-01T00:00:00+08:00</published><updated>2019-05-01T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-05-01:/blog/configure-ssh-to-connect-to-a-server/</id><summary type="html">&lt;p&gt;How to configure ssh to connect to a server using an ssh key for access&lt;/p&gt;</summary><content type="html">&lt;p&gt;This article describes how to configure ssh to connect to a server using an ssh
key for access. Using ssh keys is more secure than passwords, and makes it
easier to automate systems using tools like &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First, &lt;a href="https://www.cogini.com/blog/creating-an-ssh-key/"&gt;create an ssh key&lt;/a&gt;,
if you don't have one already.&lt;/p&gt;
&lt;h2&gt;Configure your ssh config file&lt;/h2&gt;
&lt;p&gt;If your server only has an IP address, you can make a host alias to make it
easier to use.  Create a file on your local machine called &lt;code&gt;~/.ssh/config&lt;/code&gt;. Add
the server to it:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Host web-server
    HostName 123.45.67.89
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;a href="https://www.ssh.com/ssh/config/"&gt;ssh config file&lt;/a&gt; supports a lot more
options. For example, you can specify the userid to use on the remote server,
the port, or the key.&lt;/p&gt;
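&lt;p&gt;For example, a sketch of an entry using those options (the user name, port, and key path here are hypothetical):&lt;/p&gt;

```
Host web-server
    HostName 123.45.67.89
    # Hypothetical values; adjust for your server
    User deploy
    Port 2222
    IdentityFile ~/.ssh/id_rsa
```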
&lt;p&gt;Set the file permissions on &lt;code&gt;~/.ssh/config&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;chmod&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~/.ssh/config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;ssh is picky about file permissions. For security, the files and directories
need to be readable only by you, and ssh will refuse to work if they are wrong.&lt;/p&gt;
&lt;p&gt;Test it by connecting to the server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;user@web-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If it doesn't work, run ssh with &lt;code&gt;-v&lt;/code&gt; flags to see what the problem is.  You
can add more verbosity, e.g. &lt;code&gt;-vvvv&lt;/code&gt; if you need more detail.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;-vv&lt;span class="w"&gt; &lt;/span&gt;user@web-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For example, to fix the ownership and permissions of your ssh files:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;chown&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;:staff&lt;span class="w"&gt; &lt;/span&gt;~/.ssh
chmod&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;700&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~/.ssh

chown&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;:staff&lt;span class="w"&gt; &lt;/span&gt;~/.ssh/id_rsa
chmod&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="ssh"/></entry><entry><title>Using ASDF with Elixir and Phoenix</title><link href="https://www.cogini.com/blog/using-asdf-with-elixir-and-phoenix/" rel="alternate"/><published>2019-03-26T00:00:00+08:00</published><updated>2019-03-26T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-03-26:/blog/using-asdf-with-elixir-and-phoenix/</id><summary type="html">&lt;p&gt;The &lt;a href="https://asdf-vm.com/"&gt;ASDF&lt;/a&gt; version manager lets us manage multiple versions of Erlang, Elixir and Node.js. It is a language-independent equivalent to tools like Ruby's &lt;a href="https://rvm.io/"&gt;RVM&lt;/a&gt; or &lt;a href="https://github.com/rbenv/rbenv"&gt;rbenv&lt;/a&gt;.&lt;/p&gt;</summary><content type="html">&lt;p&gt;For simple deployments, we can install Erlang and Elixir from binary packages.
Instead of using the packages that come with the OS, which are generally
out of date, you should use the
&lt;a href="https://www.erlang-solutions.com/resources/download.html"&gt;packages from Erlang Solutions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;One disadvantage of the OS packages is that we can only have one version
installed at a time. If different projects have different versions, then we
have a problem. Similarly, when we upgrade the Erlang or Elixir version, we
should first test the code with the new version, moving a version through dev
and test environments, then putting it into production. If anything goes wrong,
we need to be able to roll back quickly. To support this, we need to precisely
specify runtime versions and keep multiple versions installed so we can quickly
switch between them.&lt;/p&gt;
&lt;p&gt;This is mainly useful for dev and build environments. For production,
&lt;a href="/blog/best-practices-for-deploying-elixir-apps/"&gt;use releases&lt;/a&gt;. Releases
bundle the VM with the code, so we don't need to install Erlang on the prod
machine at all.  We just install the release and it includes the matching VM that we
tested with.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://asdf-vm.com/"&gt;ASDF&lt;/a&gt; version manager lets us manage multiple
versions of Erlang, Elixir and Node.js. It is a language-independent equivalent
to tools like Ruby's &lt;a href="https://rvm.io/"&gt;RVM&lt;/a&gt; or &lt;a href="https://github.com/rbenv/rbenv"&gt;rbenv&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;ASDF is safe to install on your machine beside the Erlang packaged with your
OS. It won't conflict with anything else; that's its whole reason for existing.
You can, however, also use it to install a default global version.&lt;/p&gt;
&lt;p&gt;It uses the &lt;code&gt;.tool-versions&lt;/code&gt; file in your project to automatically set the
path to use specific versions. The file looks like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;erlang 21.3
elixir 1.8.1
nodejs 10.15.3
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;asdf install&lt;/code&gt; command reads the file and installs the specified
versions if they are not already present on your system.&lt;/p&gt;
&lt;h2&gt;Install ASDF&lt;/h2&gt;
&lt;p&gt;These instructions assume that you are running macOS on your dev machine and
Linux on your prod machine.&lt;/p&gt;
&lt;p&gt;This script &lt;a href="https://github.com/cogini/mix-deploy-example/blob/master/bin/build-install-asdf-macos"&gt;automates the process of installing ASDF on
macOS&lt;/a&gt;.
The following are the step-by-step commands, with explanations.&lt;/p&gt;
&lt;p&gt;First, get the ASDF code:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/asdf-vm/asdf.git&lt;span class="w"&gt; &lt;/span&gt;~/.asdf&lt;span class="w"&gt; &lt;/span&gt;--branch&lt;span class="w"&gt; &lt;/span&gt;v0.7.1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Add commands to your shell startup scripts:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $HOME/.asdf/asdf.sh&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bash_profile
&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $HOME/.asdf/completions/asdf.bash&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bash_profile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The commands above install from git, for consistency with the Linux instructions below.&lt;/p&gt;
&lt;p&gt;You can also install via &lt;a href="https://brew.sh/"&gt;Homebrew&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;brew&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;asdf
&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $(brew --prefix asdf)/asdf.sh&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bash_profile
&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $(brew --prefix asdf)/etc/bash_completion.d/asdf.bash&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bash_profile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;After installing ASDF, log out of your shell and log back in to activate the
scripts.&lt;/p&gt;
&lt;p&gt;See &lt;a href="https://asdf-vm.com/#/core-manage-asdf-vm"&gt;the ASDF docs&lt;/a&gt; for more details.&lt;/p&gt;
&lt;p&gt;Next, install the ASDF plugins for Erlang, Elixir, and Node.js:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;erlang
asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;elixir
asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;nodejs
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Install build dependencies&lt;/h3&gt;
&lt;p&gt;ASDF builds Erlang from source, so it needs to have some build tools and
libraries installed. Other packages like Node.js have dependencies as well.&lt;/p&gt;
&lt;p&gt;On macOS, first install &lt;a href="https://brew.sh/"&gt;Homebrew&lt;/a&gt;, then run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# Install common ASDF plugin deps&lt;/span&gt;
brew&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;coreutils&lt;span class="w"&gt; &lt;/span&gt;automake&lt;span class="w"&gt; &lt;/span&gt;autoconf&lt;span class="w"&gt; &lt;/span&gt;openssl&lt;span class="w"&gt; &lt;/span&gt;libyaml&lt;span class="w"&gt; &lt;/span&gt;readline&lt;span class="w"&gt; &lt;/span&gt;libxslt&lt;span class="w"&gt; &lt;/span&gt;libtool

&lt;span class="c1"&gt;# Install Erlang plugin deps&lt;/span&gt;
brew&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;unixodbc&lt;span class="w"&gt; &lt;/span&gt;wxmac

&lt;span class="c1"&gt;# Install Java. It&amp;#39;s optional, but installing it avoids popup prompts.&lt;/span&gt;
&lt;span class="c1"&gt;# If you already have Java installed, you don&amp;#39;t need to do this&lt;/span&gt;
brew&lt;span class="w"&gt; &lt;/span&gt;cask&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;java

&lt;span class="c1"&gt;# Install Node.js plugin deps&lt;/span&gt;
brew&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;gpg

&lt;span class="c1"&gt;# Import node gpg keys&lt;/span&gt;
&lt;span class="c1"&gt;# This can be flaky, as it depends on network connections to the GPG key servers&lt;/span&gt;
&lt;span class="c1"&gt;# You may need to run it multiple times&lt;/span&gt;
bash&lt;span class="w"&gt; &lt;/span&gt;~/.asdf/plugins/nodejs/bin/import-release-team-keyring
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;a href="https://github.com/asdf-vm/asdf-erlang"&gt;the ASDF Erlang docs&lt;/a&gt; for more options.&lt;/p&gt;
&lt;h3&gt;Install tools&lt;/h3&gt;
&lt;p&gt;Use ASDF to install the versions of Erlang, Elixir and Node.js specified
in the &lt;code&gt;.tool-versions&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You might need to run this twice.&lt;/p&gt;
&lt;p&gt;Install Elixir libraries into the ASDF dir for the specific Elixir version you are running:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;local.hex&lt;span class="w"&gt; &lt;/span&gt;--if-missing&lt;span class="w"&gt; &lt;/span&gt;--force
mix&lt;span class="w"&gt; &lt;/span&gt;local.rebar&lt;span class="w"&gt; &lt;/span&gt;--if-missing&lt;span class="w"&gt; &lt;/span&gt;--force

&lt;span class="c1"&gt;# Install the Phoenix archive (optional), so that you can run e.g. `mix phx.new`&lt;/span&gt;
&lt;span class="c1"&gt;# mix archive.install hex phx_new 1.4.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Confirm that it works by building the app the normal way:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;deps.get
mix&lt;span class="w"&gt; &lt;/span&gt;deps.compile
mix&lt;span class="w"&gt; &lt;/span&gt;compile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should be able to run the app locally with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;ecto.create

&lt;span class="c1"&gt;# Webpack (the new hotness)&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;assets&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npm&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;node&lt;span class="w"&gt; &lt;/span&gt;node_modules/webpack/bin/webpack.js&lt;span class="w"&gt; &lt;/span&gt;--mode&lt;span class="w"&gt; &lt;/span&gt;development&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Brunch (old and busted)&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;assets&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npm&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;node&lt;span class="w"&gt; &lt;/span&gt;node_modules/brunch/bin/brunch&lt;span class="w"&gt; &lt;/span&gt;build&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iex&lt;span class="w"&gt; &lt;/span&gt;-S&lt;span class="w"&gt; &lt;/span&gt;mix&lt;span class="w"&gt; &lt;/span&gt;phx.server
open&lt;span class="w"&gt; &lt;/span&gt;http://localhost:4000/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Install ASDF on Linux&lt;/h2&gt;
&lt;p&gt;The following are &lt;a href="https://github.com/cogini/mix-deploy-example/tree/master/bin"&gt;scripts to install ASDF in a build/CI environment&lt;/a&gt;:&lt;/p&gt;
&lt;h3&gt;Install on Ubuntu 18.04&lt;/h3&gt;
&lt;p&gt;First, set up the base system and utils:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Initialize package manager and install basic utilities&amp;quot;&lt;/span&gt;

&lt;span class="nb"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;DEBIAN_FRONTEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noninteractive

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Updating package repos&amp;quot;&lt;/span&gt;
apt-get&lt;span class="w"&gt; &lt;/span&gt;update&lt;span class="w"&gt; &lt;/span&gt;-qq

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing locale &lt;/span&gt;&lt;span class="nv"&gt;$LANG&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;
&lt;span class="nv"&gt;LANG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;C&lt;span class="w"&gt; &lt;/span&gt;apt-get&lt;span class="w"&gt; &lt;/span&gt;-qq&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;locales
locale-gen&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$LANG&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Updating system packages&amp;quot;&lt;/span&gt;
apt-get&lt;span class="w"&gt; &lt;/span&gt;-qq&lt;span class="w"&gt; &lt;/span&gt;upgrade

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing apt deps&amp;quot;&lt;/span&gt;
apt-get&lt;span class="w"&gt; &lt;/span&gt;-qq&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;dialog&lt;span class="w"&gt; &lt;/span&gt;apt-utils

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing utilities&amp;quot;&lt;/span&gt;
apt-get&lt;span class="w"&gt; &lt;/span&gt;-qq&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;wget&lt;span class="w"&gt; &lt;/span&gt;curl&lt;span class="w"&gt; &lt;/span&gt;unzip&lt;span class="w"&gt; &lt;/span&gt;make&lt;span class="w"&gt; &lt;/span&gt;git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Next, install build deps:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Install ASDF plugin dependencies&amp;quot;&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF common plugin deps&amp;quot;&lt;/span&gt;
apt-get&lt;span class="w"&gt; &lt;/span&gt;-qq&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;automake&lt;span class="w"&gt; &lt;/span&gt;autoconf&lt;span class="w"&gt; &lt;/span&gt;libreadline-dev&lt;span class="w"&gt; &lt;/span&gt;libncurses-dev&lt;span class="w"&gt; &lt;/span&gt;libssl-dev&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;libyaml-dev&lt;span class="w"&gt; &lt;/span&gt;libxslt-dev&lt;span class="w"&gt; &lt;/span&gt;libffi-dev&lt;span class="w"&gt; &lt;/span&gt;libtool&lt;span class="w"&gt; &lt;/span&gt;unixodbc-dev

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF Erlang plugin deps&amp;quot;&lt;/span&gt;
apt-get&lt;span class="w"&gt; &lt;/span&gt;-qq&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;build-essential&lt;span class="w"&gt; &lt;/span&gt;libncurses5-dev&lt;span class="w"&gt; &lt;/span&gt;libwxgtk3.0-dev&lt;span class="w"&gt; &lt;/span&gt;libgl1-mesa-dev&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;libglu1-mesa-dev&lt;span class="w"&gt; &lt;/span&gt;libpng-dev&lt;span class="w"&gt; &lt;/span&gt;libssh-dev&lt;span class="w"&gt; &lt;/span&gt;unixodbc-dev&lt;span class="w"&gt; &lt;/span&gt;xsltproc&lt;span class="w"&gt; &lt;/span&gt;fop

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF Node.js plugin deps&amp;quot;&lt;/span&gt;
apt-get&lt;span class="w"&gt; &lt;/span&gt;-qq&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;dirmngr&lt;span class="w"&gt; &lt;/span&gt;gpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Install on CentOS 7&lt;/h3&gt;
&lt;p&gt;First, set up the base system repo and utils:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Initialize package manager and install basic utilities&amp;quot;&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing EPEL repository&amp;quot;&lt;/span&gt;
wget&lt;span class="w"&gt; &lt;/span&gt;--no-verbose&lt;span class="w"&gt; &lt;/span&gt;-P&lt;span class="w"&gt; &lt;/span&gt;/tmp&lt;span class="w"&gt; &lt;/span&gt;https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;-q&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;/tmp/epel-release-latest-7.noarch.rpm

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Updating package repos&amp;quot;&lt;/span&gt;
yum&lt;span class="w"&gt; &lt;/span&gt;update&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;-q

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Updating system packages&amp;quot;&lt;/span&gt;
yum&lt;span class="w"&gt; &lt;/span&gt;upgrade&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;-q&lt;span class="w"&gt; &lt;/span&gt;--enablerepo&lt;span class="o"&gt;=&lt;/span&gt;epel

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing utilities&amp;quot;&lt;/span&gt;
yum&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;-q&lt;span class="w"&gt; &lt;/span&gt;wget&lt;span class="w"&gt; &lt;/span&gt;curl&lt;span class="w"&gt; &lt;/span&gt;unzip&lt;span class="w"&gt; &lt;/span&gt;make&lt;span class="w"&gt; &lt;/span&gt;git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Next, install build deps:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Install ASDF plugin dependencies&amp;quot;&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing common ASDF plugin deps&amp;quot;&lt;/span&gt;
yum&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;-q&lt;span class="w"&gt; &lt;/span&gt;automake&lt;span class="w"&gt; &lt;/span&gt;autoconf&lt;span class="w"&gt; &lt;/span&gt;readline-devel&lt;span class="w"&gt; &lt;/span&gt;ncurses-devel&lt;span class="w"&gt; &lt;/span&gt;openssl-devel&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;libyaml-devel&lt;span class="w"&gt; &lt;/span&gt;libxslt-devel&lt;span class="w"&gt; &lt;/span&gt;libffi-devel&lt;span class="w"&gt; &lt;/span&gt;libtool&lt;span class="w"&gt; &lt;/span&gt;unixODBC-devel

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF Erlang plugin deps&amp;quot;&lt;/span&gt;
yum&lt;span class="w"&gt; &lt;/span&gt;groupinstall&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Development Tools&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;C Development Tools and Libraries&amp;#39;&lt;/span&gt;
yum&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;-q&lt;span class="w"&gt; &lt;/span&gt;wxGTK3-devel&lt;span class="w"&gt; &lt;/span&gt;wxBase3&lt;span class="w"&gt; &lt;/span&gt;openssl-devel&lt;span class="w"&gt; &lt;/span&gt;libxslt&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;java-1.8.0-openjdk-devel&lt;span class="w"&gt; &lt;/span&gt;libiodbc&lt;span class="w"&gt; &lt;/span&gt;unixODBC&lt;span class="w"&gt; &lt;/span&gt;erlang-odbc

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF Node.js plugin deps&amp;quot;&lt;/span&gt;
yum&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;-y&lt;span class="w"&gt; &lt;/span&gt;-q&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;gpg&lt;span class="w"&gt; &lt;/span&gt;perl&lt;span class="w"&gt; &lt;/span&gt;perl-Digest-SHA
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Install ASDF&lt;/h3&gt;
&lt;p&gt;Next, install ASDF itself. This step is the same for Ubuntu and CentOS:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;==&amp;gt; Install ASDF and plugins&amp;quot;&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;!&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.asdf&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;then&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/asdf-vm/asdf.git&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.asdf&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;--branch&lt;span class="w"&gt; &lt;/span&gt;v0.7.1

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $HOME/.asdf/asdf.sh&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bashrc
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $HOME/.asdf/completions/asdf.bash&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bashrc
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.asdf/asdf.sh&amp;quot;&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;!&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$ASDF_DIR&lt;/span&gt;&lt;span class="s2"&gt;/plugins/erlang&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;then&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF erlang plugin&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;erlang
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;!&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$ASDF_DIR&lt;/span&gt;&lt;span class="s2"&gt;/plugins/elixir&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;then&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF elixir plugin&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;elixir
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;!&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$ASDF_DIR&lt;/span&gt;&lt;span class="s2"&gt;/plugins/nodejs&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;then&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing ASDF nodejs plugin&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;nodejs

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Importing Node.js release team OpenPGP keys to main keyring&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# This can be flaky&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;bash&lt;span class="w"&gt; &lt;/span&gt;~/.asdf/plugins/nodejs/bin/import-release-team-keyring
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Install Erlang and Elixir&lt;/h3&gt;
&lt;p&gt;Finally, install Erlang, Elixir, and the other tools. You would generally run this
in the "build" phase of your scripts, so the &lt;code&gt;.tool-versions&lt;/code&gt; file in git controls the versions.&lt;/p&gt;
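&lt;p&gt;For reference, &lt;code&gt;.tool-versions&lt;/code&gt; is a plain text file in the project root listing
one tool name and version per line. The versions here are just an example; use whatever
your project needs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;erlang 22.0.7
elixir 1.9.1
nodejs 10.16.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;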
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;===&amp;gt; Installing build deps with ASDF&amp;quot;&lt;/span&gt;
asdf&lt;span class="w"&gt; &lt;/span&gt;install
&lt;span class="c1"&gt;# Run it again to make sure all the plugins ran, as there have been issues with return codes in the past&lt;/span&gt;
asdf&lt;span class="w"&gt; &lt;/span&gt;install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/><category term="asdf"/></entry><entry><title>Running a local caching DNS for your app</title><link href="https://www.cogini.com/blog/running-a-local-caching-dns-for-your-app/" rel="alternate"/><published>2019-01-04T00:00:00+08:00</published><updated>2019-01-04T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-01-04:/blog/running-a-local-caching-dns-for-your-app/</id><summary type="html">&lt;p&gt;When your app is acting as a proxy to back end servers, DNS can become a bottleneck. Running a local caching DNS server on the app server machine speeds up performance.&lt;/p&gt;</summary><content type="html">&lt;p&gt;When your app is acting as a proxy to back end servers, it may need to do
a DNS lookup to convert the host name of the server into an IP hundreds of
times a second. DNS can become the bottleneck for requests. It also puts heavy
load on your hosting provider's DNS servers, which may not be able to handle it,
and the provider may get unhappy with you. Hard-coding the IPs of back-end
servers is dangerous, as they may change over time.&lt;/p&gt;
&lt;p&gt;Running a local caching DNS server on the app server machine makes repeated
lookups fast and reduces load on external DNS servers: the local server forwards
requests to the upstream DNS servers and caches the results.&lt;/p&gt;
&lt;p&gt;Here is an example &lt;code&gt;/etc/named.conf&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;listen&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;53&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;127.0.0.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;aaaa&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;directory&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/named&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;dump&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/named/data/cache_dump.db&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;statistics&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/named/data/named_stats.txt&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;memstatistics&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/named/data/named_mem_stats.txt&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;recursion&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;127.0.0.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;transfer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;notify&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;no&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;minimal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;responses&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;dnssec&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;enable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;dnssec&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;validation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;dnssec&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;lookaside&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="cm"&gt;/* Path to ISC DLV key */&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;bindkeys&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/etc/named.iscdlv.key&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;managed&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;keys&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;directory&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/named/dynamic&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;forwarders&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="m m-Double"&gt;8.8.8.8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="m m-Double"&gt;8.8.4.4&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;forward&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;only&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;include&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/etc/named/rndc.key&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// controls {};&lt;/span&gt;
&lt;span class="nx"&gt;controls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;inet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;127.0.0.1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;keys&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;rndc-key&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;statistics&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;channels&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;inet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;127.0.0.1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bind_cache_statistics_port&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;127.0.0.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;logging&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;// http://www.zytrax.com/books/dns/ch7/logging.html&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;debug_log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/log/named/debug.log&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;versions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;dynamic&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// debug and above. Assume the global debug level defined by either the command line parameter -d or by running rndc trace&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;time&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;simple_log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/log/named/bind.log&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;versions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;//severity warning;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;notice&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;time&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;channel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;query_log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/var/log/named/query.log&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;versions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;dynamic&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;//severity warning;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;time&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;print&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;simple_log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// Enable this to get debug logging&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;//debug_log;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;queries&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// Enable this to get query logging, it&amp;#39;s not on for category default&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// query_log;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;view&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;localhost&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;match&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;destinations&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;127.0.0.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;zone&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;.&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;IN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;hint&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;named.ca&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;include&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/etc/named.rfc1912.zones&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;include&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;/etc/named.root.key&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should also configure &lt;code&gt;/etc/resolv.conf&lt;/code&gt; to specify the caching DNS first,
followed by your upstream DNS servers:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;nameserver 127.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here is &lt;a href="https://github.com/cogini/ansible-role-users"&gt;an Ansible role that sets up the caching DNS server&lt;/a&gt;.
It's also in &lt;a href="https://galaxy.ansible.com/cogini/bind_cache"&gt;Ansible Galaxy&lt;/a&gt;.&lt;/p&gt;</content><category term="DevOps"/><category term="dns"/><category term="elixir"/><category term="phoenix"/></entry><entry><title>Creating an ssh key</title><link href="https://www.cogini.com/blog/creating-an-ssh-key/" rel="alternate"/><published>2019-01-01T00:00:00+08:00</published><updated>2019-01-01T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2019-01-01:/blog/creating-an-ssh-key/</id><summary type="html">&lt;p&gt;How to create an ssh key for beginners&lt;/p&gt;</summary><content type="html">&lt;p&gt;Instead of using passwords, it's more secure to use ssh keys to control access
to servers.  It also makes it easier to automate systems using tools like
&lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To create an ssh key, on macOS or Linux, open up a terminal and type:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh-keygen&lt;span class="w"&gt; &lt;/span&gt;-t&lt;span class="w"&gt; &lt;/span&gt;rsa&lt;span class="w"&gt; &lt;/span&gt;-b&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4096&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-C&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;your_email@example.com&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
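&lt;p&gt;If you want to script key generation (for example in CI), the same command can run non-interactively. This is just a sketch: the throwaway path and empty pass phrase below are for illustration only.&lt;/p&gt;

```shell
# Sketch: generate a key non-interactively and print its fingerprint.
# The throwaway path and empty pass phrase (-N "") are for illustration
# only; use a real pass phrase for an actual key.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f /tmp/demo_key -N "" -q
ssh-keygen -l -f /tmp/demo_key.pub
```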

&lt;p&gt;Set a pass phrase to protect access to your key (optional but recommended).
The desktop will remember your pass phrase in the keyring when you log in, so
you don't have to enter it every time.&lt;/p&gt;
&lt;p&gt;Your ssh key is also used to control access to &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;.
See &lt;a href="https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/"&gt;the GitHub docs&lt;/a&gt;.&lt;/p&gt;</content><category term="DevOps"/><category term="ssh"/></entry><entry><title>Elixir and embedded programming presentation</title><link href="https://www.cogini.com/blog/elixir-and-embedded-programming-presentation/" rel="alternate"/><published>2018-08-15T00:00:00+08:00</published><updated>2018-08-15T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-08-15:/blog/elixir-and-embedded-programming-presentation/</id><content type="html">&lt;p&gt;Here are the slides for the &lt;a href="https://www.cogini.com/files/embedded-elixir-2018.pdf"&gt;presentation on Elixir and embedded
programming&lt;/a&gt; I gave to the Elixir
LA user's group. &lt;/p&gt;
&lt;p&gt;It introduces embedded programming and how Elixir is a good match for a new generation of
embedded systems.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="erlang"/><category term="embedded"/><category term="nerves"/><category term="presentations"/></entry><entry><title>Running Nerves on Amazon EC2</title><link href="https://www.cogini.com/blog/running-nerves-on-amazon-ec2/" rel="alternate"/><published>2018-08-04T00:00:00+08:00</published><updated>2018-08-04T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-08-04:/blog/running-nerves-on-amazon-ec2/</id><summary type="html">&lt;p&gt;I have been looking into the best way to deploy Elixir in the cloud. As part
of that, I have been building various AMIs with only the minimum needed to run
an Elixir app.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://nerves-project.org/"&gt;Nerves&lt;/a&gt; is a framework for building embedded
systems in Elixir. Instead of running a general purpose …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I have been looking into the best way to deploy Elixir in the cloud. As part
of that, I have been building various AMIs with only the minimum needed to run
an Elixir app.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://nerves-project.org/"&gt;Nerves&lt;/a&gt; is a framework for building embedded
systems in Elixir. Instead of running a general purpose operating system,
it does as much as possible in Elixir. It boots to an &lt;a href="https://github.com/nerves-project/erlinit"&gt;init process&lt;/a&gt;
which starts an Erlang VM, which is then responsible for starting the system.&lt;/p&gt;
&lt;p&gt;The traditional Linux boot process has a lot of legacy cruft: shell scripts
calling C programs which call kernel APIs to configure the network. With
Nerves, we do this in Elixir, using helper programs where necessary.&lt;/p&gt;
&lt;p&gt;When Elixir is in charge, we need some way to combine system-level code with
the application code on the same VM and handle system updates. That is done
by the &lt;a href="https://github.com/nerves-project/shoehorn"&gt;Shoehorn&lt;/a&gt; library.&lt;/p&gt;
&lt;p&gt;This post shows how you can run a Nerves application on EC2.&lt;/p&gt;
&lt;h1&gt;Nerves on EC2&lt;/h1&gt;
&lt;p&gt;I created a Nerves "&lt;a href="https://hexdocs.pm/nerves/systems.html"&gt;system&lt;/a&gt;" for
EC2, &lt;a href="https://github.com/cogini/nerves_system_ec2"&gt;nerves_system_ec2&lt;/a&gt;.  It is
based on
&lt;a href="https://github.com/nerves-project/nerves_system_x86_64"&gt;nerves_system_x86_64&lt;/a&gt;,
adding the drivers needed for EC2 to the kernel and configuring the boot
process for the EC2 environment.&lt;/p&gt;
&lt;p&gt;I created &lt;a href="https://github.com/cogini/nerves_init_ec2"&gt;nerves_init_ec2&lt;/a&gt; to bring
up the system, similar to &lt;a href="https://github.com/nerves-project/nerves_init_gadget"&gt;nerves_init_gadget&lt;/a&gt;.
AWS provides &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html"&gt;EC2 instance metadata&lt;/a&gt;
to the running system, accessed via HTTP calls to a special IP address.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;nerves_init_ec2&lt;/code&gt; uses this information at runtime to configure the instance.
The most important part is configuring the ssh console to use the SSH
&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html"&gt;key pair&lt;/a&gt;
to access the system remotely.&lt;/p&gt;
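&lt;p&gt;The metadata service is plain HTTP, so it is easy to poke at from a shell on the instance. A minimal sketch (the helper name is mine, not part of &lt;code&gt;nerves_init_ec2&lt;/code&gt;):&lt;/p&gt;

```shell
# Build the URL for an EC2 instance metadata key. The 169.254.169.254
# address only answers from inside an EC2 instance.
metadata_url() {
  echo "http://169.254.169.254/latest/meta-data/$1"
}

# On an instance you could then fetch, for example, the ssh public key:
#   curl -s "$(metadata_url public-keys/0/openssh-key)"
metadata_url local-ipv4
```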
&lt;h1&gt;Building an app for EC2&lt;/h1&gt;
&lt;p&gt;Following are instructions for how to get a Nerves app running on EC2.
I created a simple Nerves app which does all this, &lt;a href="https://github.com/cogini/hello_nerves_ec2"&gt;hello_nerves_ec2&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Set up build server&lt;/h2&gt;
&lt;p&gt;I am building the system via an instance in EC2. The build server runs Ubuntu
18.04 on a t2.xlarge instance to have more resources for building the custom
nerves system. The build generates a lot of files, so I added a 100GB gp2 EBS
volume mounted under my home directory.&lt;/p&gt;
&lt;p&gt;To deploy, I write the Nerves firmware image to the disk, then take a snapshot
of the volume, turn it into an AMI and launch an instance with it. I attach a
1GB EBS volume to it for the Nerves system under &lt;code&gt;/dev/xvdn&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Install build deps&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;apt&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;build-essential&lt;span class="w"&gt; &lt;/span&gt;automake&lt;span class="w"&gt; &lt;/span&gt;autoconf&lt;span class="w"&gt; &lt;/span&gt;git&lt;span class="w"&gt; &lt;/span&gt;squashfs-tools&lt;span class="w"&gt; &lt;/span&gt;ssh-askpass
sudo&lt;span class="w"&gt; &lt;/span&gt;apt&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;libssl-dev&lt;span class="w"&gt; &lt;/span&gt;libncurses5-dev&lt;span class="w"&gt; &lt;/span&gt;bc&lt;span class="w"&gt; &lt;/span&gt;m4&lt;span class="w"&gt; &lt;/span&gt;unzip&lt;span class="w"&gt; &lt;/span&gt;cmake&lt;span class="w"&gt; &lt;/span&gt;python&lt;span class="w"&gt; &lt;/span&gt;xsltproc
sudo&lt;span class="w"&gt; &lt;/span&gt;apt&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;libmnl-dev

wget&lt;span class="w"&gt; &lt;/span&gt;https://github.com/fhunleth/fwup/releases/download/v1.2.3/fwup_1.2.3_amd64.deb
sudo&lt;span class="w"&gt; &lt;/span&gt;dpkg&lt;span class="w"&gt; &lt;/span&gt;-i&lt;span class="w"&gt; &lt;/span&gt;fwup_1.2.3_amd64.deb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Set up ASDF for Erlang and Elixir&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/asdf-vm/asdf.git&lt;span class="w"&gt; &lt;/span&gt;~/.asdf&lt;span class="w"&gt; &lt;/span&gt;--branch&lt;span class="w"&gt; &lt;/span&gt;v0.5.1
&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $HOME/.asdf/asdf.sh&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bashrc
&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-e&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;\n. $HOME/.asdf/completions/asdf.bash&amp;#39;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&amp;gt;&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;~/.bashrc

asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;erlang
asdf&lt;span class="w"&gt; &lt;/span&gt;plugin-add&lt;span class="w"&gt; &lt;/span&gt;elixir

asdf&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;erlang&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;21&lt;/span&gt;.0
asdf&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;elixir&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.6.6

asdf&lt;span class="w"&gt; &lt;/span&gt;global&lt;span class="w"&gt; &lt;/span&gt;erlang&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;21&lt;/span&gt;.0
asdf&lt;span class="w"&gt; &lt;/span&gt;global&lt;span class="w"&gt; &lt;/span&gt;elixir&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.6.6

mix&lt;span class="w"&gt; &lt;/span&gt;local.hex
mix&lt;span class="w"&gt; &lt;/span&gt;local.rebar

mix&lt;span class="w"&gt; &lt;/span&gt;archive.install&lt;span class="w"&gt; &lt;/span&gt;hex&lt;span class="w"&gt; &lt;/span&gt;nerves_bootstrap
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Get the nerves system&lt;/h2&gt;
&lt;p&gt;Check out &lt;a href="https://github.com/cogini/nerves_system_ec2"&gt;nerves_system_ec2&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/cogini/nerves_system_ec2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Create a new Nerves project&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;nerves.new&lt;span class="w"&gt; &lt;/span&gt;hello_nerves_ec2
&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;hello_nerves_ec2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In &lt;code&gt;mix.exs&lt;/code&gt;, reference the new nerves system:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;defp system(&amp;quot;ec2&amp;quot;), do: [{:nerves_system_ec2, path: &amp;quot;../nerves_system_ec2&amp;quot;, runtime: false, nerves: [compile: true]}]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Add nerves_init_ec2&lt;/h2&gt;
&lt;p&gt;Add &lt;a href="https://github.com/cogini/nerves_init_ec2"&gt;nerves_init_ec2&lt;/a&gt; to &lt;code&gt;mix.exs&lt;/code&gt; deps:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:nerves_runtime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;~&amp;gt; 0.4&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:nerves_init_ec2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;cogini/nerves_init_ec2&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In &lt;code&gt;config/config.exs&lt;/code&gt;, add &lt;code&gt;nerves_init_ec2&lt;/code&gt; to the list of applications
loaded by &lt;a href="https://github.com/nerves-project/shoehorn"&gt;Shoehorn&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;shoehorn&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[:&lt;/span&gt;&lt;span class="n"&gt;nerves_runtime&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;nerves_init_ec2&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Mix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;Project&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="o"&gt;()[:&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Configure &lt;code&gt;nerves_init_ec2&lt;/code&gt; if you like. The defaults will bring up a system
with an IEx console accessible via ssh on port 22.&lt;/p&gt;
&lt;h2&gt;Build the project&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;MIX_TARGET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ec2
mix&lt;span class="w"&gt; &lt;/span&gt;deps.get
mix&lt;span class="w"&gt; &lt;/span&gt;firmware
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Burn the firmware to the EBS volume mounted on the build server. Be careful
about the device name: I have messed up my build server by overwriting its
disks with firmware files more times than I would like to admit...&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;firmware.burn&lt;span class="w"&gt; &lt;/span&gt;-d&lt;span class="w"&gt; &lt;/span&gt;/dev/xvdn
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://github.com/nerves-project/nerves_runtime#filesystem-initialization"&gt;nerves_runtime&lt;/a&gt;
will initialize the root partition on startup, but we may only get one boot out of an AMI.
Create the filesystem in the build environment:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;mkfs.ext4&lt;span class="w"&gt; &lt;/span&gt;/dev/xvdn4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Configure AWS&lt;/h2&gt;
&lt;p&gt;At this point, the new Nerves system is all set up on the EBS volume.
Now we need to launch an instance from it. We do that using the AWS API, so
it can run from anywhere. I normally run it from my dev machine, but you
can do it from the build server as well.&lt;/p&gt;
&lt;p&gt;In order to talk to the API, we need permissions. When you create an AWS
account, you get a "root" account with full permissions, but you should
not use it for everyday operations. You should create an admin user
for yourself and a role for your app to run under which gives it access
to specific resources.&lt;/p&gt;
&lt;p&gt;Go to &lt;a href="https://console.aws.amazon.com/iam/home"&gt;IAM&lt;/a&gt; in the AWS console.&lt;/p&gt;
&lt;p&gt;Create a group called &lt;code&gt;Admins&lt;/code&gt; and attach policy &lt;code&gt;AdministratorAccess&lt;/code&gt;, giving members full access.&lt;/p&gt;
&lt;p&gt;Create a user for yourself, e.g. &lt;code&gt;cogini-jake&lt;/code&gt;. Under "Access type," check
"Programmatic access" and "AWS Management Console access." Set your login password.
Click "Next: Permissions" and then "Add user to group", selecting the &lt;code&gt;Admins&lt;/code&gt; group.
Record the &quot;Access key id&quot; and &quot;Secret access key&quot; now; this is your only chance.&lt;/p&gt;
&lt;p&gt;On your local dev machine, set up an AWS profile in &lt;code&gt;~/.aws/credentials&lt;/code&gt; with the keys:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[nerves-dev]&lt;/span&gt;
&lt;span class="na"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;XXX&lt;/span&gt;
&lt;span class="na"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;YYY&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Most AWS client tools will automatically look up the access keys using the profile,
so you can control keys on a per-project basis by setting the profile in the environment.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;AWS_PROFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nerves-dev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Install the &lt;a href="https://aws.amazon.com/cli/"&gt;AWS Command Line Interface&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;pip&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;awscli
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Create an ssh &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html"&gt;key pair&lt;/a&gt;.
Run &lt;a href="https://github.com/cogini/hello_nerves_ec2/blob/master/bin/create-key-pair.sh"&gt;create-key-pair.sh&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bin/create-key-pair.sh&lt;span class="w"&gt; &lt;/span&gt;nerves
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Copy the output to &lt;code&gt;~/.ssh/nerves.pem&lt;/code&gt; and &lt;code&gt;chmod 0600 nerves.pem&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Create an AWS security group (like a firewall) which allows access to the ports on the instance from the Internet.
&lt;a href="https://github.com/cogini/hello_nerves_ec2/blob/master/bin/create-security-group.sh"&gt;create-security-group.sh&lt;/a&gt;
opens port 22 for the IEx console and port 80 for HTTP.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bin/create-security-group.sh&lt;span class="w"&gt; &lt;/span&gt;nerves
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Launch the instance&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://github.com/cogini/hello_nerves_ec2/blob/master/bin/launch-instance-from-volume.sh"&gt;launch-instance-from-volume.sh&lt;/a&gt;
takes a snapshot of the volume, builds an AMI, then launches an EC2 instance with it.&lt;/p&gt;
&lt;p&gt;Edit the script to match your details:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# Name of security group&lt;/span&gt;
&lt;span class="nv"&gt;SECURITY_GROUP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nerves
&lt;span class="c1"&gt;# Name of instance to create&lt;/span&gt;
&lt;span class="nv"&gt;NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hello_nerves_ec2
&lt;span class="nv"&gt;KEYPAIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nerves
&lt;span class="c1"&gt;# Tag instance with owner so admins can clean up stray instances&lt;/span&gt;
&lt;span class="nv"&gt;TAG_OWNER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jake
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Run the script, specifying your volume:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bin/launch-instance-from-volume.sh&lt;span class="w"&gt; &lt;/span&gt;vol-abc123
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The script will print the IP of the new instance, or you can get it from the AWS console.&lt;/p&gt;
&lt;h2&gt;Connect to the instance&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;-i&lt;span class="w"&gt; &lt;/span&gt;~/.ssh/nerves.pem&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;123&lt;/span&gt;.45.67.89
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;To exit the SSH session, type &lt;code&gt;~.&lt;/code&gt;&lt;/p&gt;
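&lt;p&gt;If you connect often, you can add a host entry to &lt;code&gt;~/.ssh/config&lt;/code&gt; so you don't have to pass &lt;code&gt;-i&lt;/code&gt; every time. The host alias and IP below are placeholders:&lt;/p&gt;

```
Host nerves-ec2
    HostName 123.45.67.89
    IdentityFile ~/.ssh/nerves.pem
```

&lt;p&gt;Then connect with &lt;code&gt;ssh nerves-ec2&lt;/code&gt;.&lt;/p&gt;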
&lt;p&gt;You can view the console output in the AWS Management Console EC2 Dashboard with "Actions |
Instance Settings | Get System Log". The graphical instance screenshot is available immediately,
but the text log takes a few minutes to appear.&lt;/p&gt;
&lt;h1&gt;Creating nerves_system_ec2&lt;/h1&gt;
&lt;p&gt;Following is the process I used to create &lt;a href="https://github.com/cogini/nerves_system_ec2"&gt;nerves_system_ec2&lt;/a&gt;.
I basically followed the &lt;a href="https://hexdocs.pm/nerves/systems.html#customizing-your-own-nerves-system"&gt;Nerves documentation for customizing the
system&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Make a copy of nerves_system_x86_64 and modify it&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/nerves-project/nerves_system_x86_64&lt;span class="w"&gt; &lt;/span&gt;nerves_system_ec2
&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;nerves_system_ec2/
git&lt;span class="w"&gt; &lt;/span&gt;remote&lt;span class="w"&gt; &lt;/span&gt;rename&lt;span class="w"&gt; &lt;/span&gt;origin&lt;span class="w"&gt; &lt;/span&gt;upstream
git&lt;span class="w"&gt; &lt;/span&gt;remote&lt;span class="w"&gt; &lt;/span&gt;add&lt;span class="w"&gt; &lt;/span&gt;origin&lt;span class="w"&gt; &lt;/span&gt;git@github.com:cogini/nerves_system_ec2.git
git&lt;span class="w"&gt; &lt;/span&gt;push&lt;span class="w"&gt; &lt;/span&gt;origin&lt;span class="w"&gt; &lt;/span&gt;master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Configure the Nerves system&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;nerves.system.shell

make&lt;span class="w"&gt; &lt;/span&gt;menuconfig
make&lt;span class="w"&gt; &lt;/span&gt;savedefconfig

make&lt;span class="w"&gt; &lt;/span&gt;linux-menuconfig
make&lt;span class="w"&gt; &lt;/span&gt;linux-update-defconfig
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I used the same kernel config as for my &lt;a href="https://github.com/cogini/buildroot_ec2"&gt;minimal EC2 system with Buildroot&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Modify the grub.cfg config&lt;/h2&gt;
&lt;p&gt;The kernel options are the same as
&lt;a href="https://github.com/nerves-project/nerves_system_x86_64"&gt;nerves_system_x86_64&lt;/a&gt;,
with a few additions.&lt;/p&gt;
&lt;p&gt;Since we can't manually respond to a panic, we just reboot.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;set cloud_opts=panic=1 boot.panic_on_fail
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Configure hardware options:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="gh"&gt;#&lt;/span&gt; https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html#timeout-nvme-ebs-volumes
set hardware_opts=nvme.io_timeout=4294967295
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Set up a serial console, allowing output to be captured in text with "Actions |
Instance Settings | Get System Log" or the AWS CLI command &lt;code&gt;aws ec2 get-console-output&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;set console_opts=console=tty1 console=ttyS0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is the resulting kernel command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;linux (hd0,msdos2)/boot/bzImage root=PARTUUID=04030201-02 rootwait $console_opts $cloud_opts $hardware_opts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Modify /etc/erlinit.config&lt;/h2&gt;
&lt;p&gt;Use the serial console:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;-c ttyS0
-s &amp;quot;/usr/bin/nbtty&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Modify fwup.conf/fwup-revert.conf&lt;/h2&gt;
&lt;p&gt;Reduce the size of the user filesystem to match the 1GB volume. In the cloud, we should not
store data on the instance; everything should be in S3 or a database.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;define(APP_PART_COUNT, 1013248)
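# My assumption: fwup partition counts are in 512-byte blocks (check your
# fwup.conf). 1013248 blocks is then 1013248 * 512 = 518,782,976 bytes,
# roughly 495 MiB, leaving room for the other partitions on a 1 GB volume.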
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="elixir"/><category term="nerves"/><category term="aws"/></entry><entry><title>Servers for beginners</title><link href="https://www.cogini.com/blog/servers-for-beginners/" rel="alternate"/><published>2018-07-06T00:00:00+08:00</published><updated>2018-07-06T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-07-06:/blog/servers-for-beginners/</id><summary type="html">&lt;p&gt;Spinning up a server is easy enough, just go to Digital Ocean and push a button. But now you are responsible for your server. What does that mean?&lt;/p&gt;</summary><content type="html">&lt;p&gt;Spinning up a server is &lt;a href="/blog/deploying-your-phoenix-app-to-digital-ocean-for-beginners/"&gt;easy
enough&lt;/a&gt;, just
go to &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt; and push a button. But
now you are &lt;em&gt;responsible&lt;/em&gt; for your server.  What does that mean?&lt;/p&gt;
&lt;p&gt;Once you have a server running, you need to take care of a few things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Data and databases&lt;/li&gt;
&lt;li&gt;Monitoring&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Maintenance&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;Data and databases&lt;/h1&gt;
&lt;p&gt;Cloud servers and automation tools like Ansible make it very easy to set up a
server. If the server crashes, we can just build another one in a few minutes.
The important thing is protecting the data. Managing that data and making sure
it doesn't get lost or stolen is the biggest driver for hosting.&lt;/p&gt;
&lt;p&gt;We typically have two kinds of data, uploaded files and databases. In the old
days, we would keep data files on the local hard disk, and either install a
database locally or on a dedicated database server.&lt;/p&gt;
&lt;h2&gt;Object storage&lt;/h2&gt;
&lt;p&gt;These days, it's common to put uploaded files in cloud "object" storage like
Amazon S3 or Digital Ocean
&lt;a href="https://www.digitalocean.com/products/spaces/"&gt;Spaces&lt;/a&gt;.  These services are
typically very reliable, and we can be comfortable that if we have written the
data, it won't be lost in a hard disk crash.&lt;/p&gt;
&lt;h2&gt;Backups&lt;/h2&gt;
&lt;p&gt;There is a saying, "The one way to guarantee that you will get fired as a
systems administrator is to fail to make backups."&lt;/p&gt;
&lt;p&gt;While writing to an object store can generally be considered reliable, we still
need to be careful that a program bug doesn't accidentally delete everything.
DevOps tools make it almost as easy to delete an S3 bucket as to create it.&lt;/p&gt;
&lt;p&gt;It's useful to make periodic snapshots. If you are in AWS, it's probably
sufficient to periodically sync data to another S3 bucket, perhaps in a
different region. Another option is to sync data to a local disk partition and
then back it up to &lt;a href="http://www.tarsnap.com/"&gt;Tarsnap&lt;/a&gt; or &lt;a href="https://www.rsync.net/"&gt;rsync.net&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;An important part of backups is testing that they work. It's sadly common to
think that you have been making daily backups, then find that they are bad when
you need them.&lt;/p&gt;
&lt;h2&gt;Security&lt;/h2&gt;
&lt;p&gt;Make sure that the permissions on your S3 buckets are locked down. It's easy
to accidentally make them world writable. Enable encryption, so if someone
gets access to the data, they still can't read it.&lt;/p&gt;
&lt;p&gt;There is a saying that, "Backups are a way of reliably violating file system
access permissions at a distance." Make sure that your backups are as secure
as your online systems.&lt;/p&gt;
&lt;h1&gt;Databases&lt;/h1&gt;
&lt;p&gt;Running a database on the app server machine is easy. For simple applications
where the data doesn't change often, you can set up a cron job to make a &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-backup-postgresql-databases-on-an-ubuntu-vps"&gt;daily
export&lt;/a&gt;
of the data and back it up to file storage, as described above. You risk losing
up to a day of data written since the last backup.&lt;/p&gt;
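&lt;p&gt;As a sketch, the cron jobs might look like this. The database name, paths, and bucket are placeholders, and syncing to S3 assumes the AWS CLI is installed and configured:&lt;/p&gt;

```
# m h dom mon dow  command
# Nightly compressed export of the "myapp" database at 03:00
0 3 * * * pg_dump myapp | gzip > /var/backups/myapp-$(date +\%F).sql.gz
# Push the exports to an S3 bucket half an hour later
30 3 * * * aws s3 sync /var/backups/ s3://myapp-db-backups/
```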
&lt;p&gt;It's generally better to separate the app server from the db server. It makes
it easier to upgrade the app server without worrying about the data and provides
a bit more security.&lt;/p&gt;
&lt;p&gt;If you need higher availability, the traditional solution is to use replication
to synchronize the data from your primary database to a backup database. If the
primary fails, you promote the backup to primary and point the application to
it. You may lose a transaction or two that is in flight, but generally this is
quite safe.&lt;/p&gt;
&lt;p&gt;Cloud hosting environments like AWS and GCP have database hosting services that
take care of the database for you. If you are running in the cloud, that's what
you should use. You can run a single database and make periodic snapshot
backups, similar to the export process described above. You can also set up a
hot backup for high availability. This typically uses the magic of the cloud
service's block storage system to replicate data instead of the database native
protocol, so it is less complex and performs better.&lt;/p&gt;
&lt;h1&gt;Monitoring and alerting&lt;/h1&gt;
&lt;p&gt;You need some way to make sure your server is working other than waiting for
your customers to call. You also need to get notified when it is struggling,
e.g. having application errors.&lt;/p&gt;
&lt;h2&gt;Monitoring&lt;/h2&gt;
&lt;p&gt;There are two sides to monitoring: external and internal.&lt;/p&gt;
&lt;p&gt;External monitoring periodically checks that your service is up, e.g. by making
an HTTP request to your home page or (better) a special health check URL.&lt;/p&gt;
&lt;p&gt;Internal monitoring checks things that are not visible from outside, e.g.
disk space usage.&lt;/p&gt;
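&lt;p&gt;Both kinds of check can be sketched in a few lines of shell (the URL, mount point, and thresholds are examples):&lt;/p&gt;

```shell
# Two hypothetical checks: an external HTTP probe and an internal disk check.

# External: succeed only if the health-check URL answers 200 within 10 seconds.
check_health() {
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1")
  [ "$status" = "200" ]
}

# Internal: report disk usage of a mount point as a bare percentage.
disk_used_pct() {
  df --output=pcent "$1" | tail -1 | tr -dc '0-9'
}

# check_health "https://example.com/healthz" || echo "ALERT: site down"
# [ "$(disk_used_pct /)" -gt 90 ] && echo "ALERT: disk almost full"
```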
&lt;h2&gt;Logging&lt;/h2&gt;
&lt;p&gt;Your application and other services on the server will generate logs.
You need to monitor them for cries for help.&lt;/p&gt;
&lt;h2&gt;Alerting&lt;/h2&gt;
&lt;p&gt;When something goes wrong, your monitoring system should notify you. This
might be email, SMS, or a post to a Slack group.&lt;/p&gt;
&lt;h2&gt;Solutions&lt;/h2&gt;
&lt;p&gt;While you can set up your own home-grown monitoring system, it's easier
to use a service like &lt;a href="https://www.datadoghq.com/"&gt;Datadog&lt;/a&gt;. In AWS,
you can use AWS CloudWatch.&lt;/p&gt;
&lt;h1&gt;Security&lt;/h1&gt;
&lt;p&gt;When you put a server on the internet, people will &lt;em&gt;immediately&lt;/em&gt; start
attacking it. I have seen machines hacked within five minutes because
someone set the root password to abc123.&lt;/p&gt;
&lt;h2&gt;Minimizing surface area&lt;/h2&gt;
&lt;p&gt;Minimize the amount of software that you are running. Do things in
a simple way. If you are not running a piece of software, you don't
have to worry about bugs in that software being exploited to hack your system.&lt;/p&gt;
&lt;p&gt;Run a firewall that restricts access to the system. For a straightforward
application, that means opening port 22 (or an alternative port) for ssh and
port 80/443 for web. You might run a mail server like Postfix to facilitate
sending mail (though a service like AWS SES or
&lt;a href="https://sendgrid.com/"&gt;Sendgrid&lt;/a&gt; would probably be
better). Block inbound mail, and you don't have to worry about the mail server
being hacked.&lt;/p&gt;
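&lt;p&gt;On Ubuntu, a baseline like that might be expressed with &lt;code&gt;ufw&lt;/code&gt; (a sketch; adjust the ports to your setup):&lt;/p&gt;

```shell
# Baseline firewall with ufw (Ubuntu). A sketch, not a universal recipe.
setup_firewall() {
  ufw default deny incoming
  ufw default allow outgoing
  ufw allow 22/tcp     # ssh (or your alternative port)
  ufw allow 80/tcp     # http
  ufw allow 443/tcp    # https
  ufw --force enable
}
# setup_firewall
```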
&lt;h2&gt;Tighten up defaults&lt;/h2&gt;
&lt;p&gt;Only allow remote access with ssh keys, not passwords. Don't allow remote
logins as root. Only allow ssh access for your own user account, not that of
the application. Use strong passwords everywhere. Use a password manager like
&lt;a href="https://www.lastpass.com/"&gt;LastPass&lt;/a&gt; to manage them.&lt;/p&gt;
&lt;h2&gt;Keep up to date&lt;/h2&gt;
&lt;p&gt;It's tempting to say, "If it ain't broke, don't fix it." Security
vulnerabilities are found all the time, though, so it's important to keep your
system up to date, or you will be caught by some script kiddie scanning for
known vulnerabilities.  Apply updates periodically. The cloud makes this easier
than it used to be. Run a stable version of Linux, e.g. CentOS or Ubuntu LTS,
and you can safely update packages. For major OS updates, spin up a new
instance to try it out. This is where having your db separate from your app
server makes things easier.&lt;/p&gt;
&lt;h2&gt;Run separate dev, staging and prod environments&lt;/h2&gt;
&lt;p&gt;The cloud makes it easy to make new instances. Keep each environment separate,
simple and consistent. That way you can keep production data safe.&lt;/p&gt;
&lt;h2&gt;Avoid vulnerable software&lt;/h2&gt;
&lt;p&gt;Keep your custom app separate from third party code, or lock it down
as much as you can.&lt;/p&gt;
&lt;p&gt;Some software is common and easy to hack. WordPress is the biggest example. It
is open source, which means that attackers can inspect it to find
vulnerabilities. There are lots of free, poorly written plugins. Millions of
people run WordPress, so it's easy to scan the internet looking for vulnerable
servers.&lt;/p&gt;
&lt;p&gt;If you need to run WordPress, use a managed hosting service, e.g.
&lt;a href="https://www.bluehost.com/products/wordpress-hosting"&gt;Bluehost&lt;/a&gt;. If you just
need a simple blog, look at static site generators, e.g.
&lt;a href="https://jekyllrb.com/"&gt;Jekyll&lt;/a&gt; or &lt;a href="https://blog.getpelican.com/"&gt;Pelican&lt;/a&gt;.
Run your blog out of an S3 bucket and there is nothing to hack or to fail.&lt;/p&gt;
&lt;h2&gt;Principle of least privilege&lt;/h2&gt;
&lt;p&gt;There are lots of other things that you can do to &lt;a href="/blog/improving-app-security-with-the-principle-of-least-privilege/"&gt;lock down your
app&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Maintenance&lt;/h1&gt;
&lt;p&gt;If you have the above done right, there is not much that you need to do to keep
your server running.  The main thing is paying attention to disks filling up,
often from log files. Set up log file rotation, e.g. with
&lt;code&gt;logrotate&lt;/code&gt;.&lt;/p&gt;
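&lt;p&gt;A typical &lt;code&gt;logrotate&lt;/code&gt; snippet, dropped into &lt;code&gt;/etc/logrotate.d/&lt;/code&gt;, assuming an app that logs to &lt;code&gt;/var/log/myapp&lt;/code&gt; (a hypothetical path):&lt;/p&gt;

```
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```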
&lt;h1&gt;Don't worry&lt;/h1&gt;
&lt;p&gt;This may seem a bit overwhelming, but it's not hard once you get the hang of
it.&lt;/p&gt;
&lt;p&gt;It's like learning to fix your own car. It may not make sense to do your own
oil changes every time, but if you learn how things work, you won't get stuck
on the side of the road with a flat tire or get ripped off by mechanics.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/"&gt;The Law of Leaky Abstractions&lt;/a&gt;
says that we need to understand the layers below our application. If you want
to be an architect, you need to understand the laws of physics for hosting.
Learning how things work allows you to build better applications and save lots
of money.&lt;/p&gt;
&lt;p&gt;You are now on your way through the &lt;a href="http://www.jakemorrison.com/the-five-stages-of-hosting.html"&gt;Five Stages of
Hosting&lt;/a&gt;. I got
to stage 4.5, running my own VPS and dedicated server hosting business, before I
ascended to the cloud. :-)&lt;/p&gt;</content><category term="DevOps"/><category term="deployment"/></entry><entry><title>The impact of network latency, errors, and concurrency on benchmarks</title><link href="https://www.cogini.com/blog/the-impact-of-network-latency-errors-and-concurrency-on-benchmarks/" rel="alternate"/><published>2018-07-06T00:00:00+08:00</published><updated>2018-07-06T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-07-06:/blog/the-impact-of-network-latency-errors-and-concurrency-on-benchmarks/</id><summary type="html">&lt;p&gt;The goal of benchmarking is to understand the performance of our system and how
to improve it. When we are making benchmarks, we need to make sure that they
match real world usage.&lt;/p&gt;
&lt;p&gt;In my post on &lt;a href="/blog/benchmarking-phoenix-on-digital-ocean/"&gt;Benchmarking Phoenix on Digital
Ocean&lt;/a&gt;,
changing the concurrent connections and network latency had …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The goal of benchmarking is to understand the performance of our system and how
to improve it. When we are making benchmarks, we need to make sure that they
match real world usage.&lt;/p&gt;
&lt;p&gt;In my post on &lt;a href="/blog/benchmarking-phoenix-on-digital-ocean/"&gt;Benchmarking Phoenix on Digital
Ocean&lt;/a&gt;,
changing the concurrent connections and network latency had a big effect on
the results. This post goes into more details on why.&lt;/p&gt;
&lt;p&gt;Most websites have connections from multiple clients. That's difficult to
simulate, so we often make a lot of connections from a small number of clients,
maybe just one. That's an unrealistic way to benchmark if we want to test how
much load the server can handle: the results end up being limited by the
latency of the connection between the client and the server.&lt;/p&gt;
&lt;h2&gt;Latency&lt;/h2&gt;
&lt;p&gt;There is a direct relationship between the latency and the number of requests
the server can handle. It is basically just math.&lt;/p&gt;
&lt;p&gt;If the server takes 1 ms to handle a request, then the fastest we can do is
1000 requests per second on a single connection, done serially.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="mf"&gt;1000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That's limited by the server: it is working as hard as it can to process
requests. This assumes that network latency is negligible.&lt;/p&gt;
&lt;p&gt;If we are testing from a client in the same data center, and round trip latency
is 4ms, then each request from the client takes 1 ms for the request and 4 ms
waiting for the network.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="mf"&gt;1000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;200&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;With one client, the server is mostly sitting idle; it could be handling five
times as many requests if it had more clients talking to it.&lt;/p&gt;
&lt;p&gt;If we add more network latency, say 50 ms, then it has a dramatic effect:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="mf"&gt;1000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;51&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;19.6&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sec&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Client concurrency&lt;/h2&gt;
&lt;p&gt;In order to accurately benchmark what the server can do, we have to add more clients.&lt;/p&gt;
&lt;p&gt;We do that in &lt;code&gt;wrk&lt;/code&gt; by adding more concurrency:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;wrk -t200 -c200 -d60s --latency -s wrk.lua &amp;quot;http://159.89.197.173&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;At a certain point, the client performance starts to affect the benchmark.
This is particularly true if we benchmark on the same server from localhost.
The client takes up resources that the server needs to handle the requests,
skewing the results.&lt;/p&gt;
&lt;h2&gt;Server concurrency&lt;/h2&gt;
&lt;p&gt;If we only have one CPU, then the server can &lt;em&gt;actually&lt;/em&gt; only do one thing at a
time. In practice, servers are often waiting on network connections, so we can
use a non-blocking I/O framework and do work on other requests when we are waiting.
Single-threaded frameworks with non-blocking I/O can handle a lot of clients,
as long as they are not doing CPU heavy work. It's efficient and easy to reason
about. As processing work increases, they start to have problems because
requests have to wait. This is how Node.js and Twisted Python work.&lt;/p&gt;
&lt;p&gt;Other languages, such as Java, C++, and Go, use threads. Each request gets
its own independent thread, and the operating system schedules them. Programming
can be complex, though, with race conditions accessing shared data. Shared memory
means that one thread can step on another thread's memory. There are also limits
to how many OS threads can be active at a time, as each takes up system resources.&lt;/p&gt;
&lt;p&gt;Server side concurrency is where Elixir really shines. The Erlang VM handles
the low level networking using non-blocking I/O. It also schedules Elixir
processes across a moderate number of OS threads, allowing it to handle
millions of lightweight processes efficiently. This is particularly useful when
the server has multiple cores: Elixir transparently takes advantage of them
all.&lt;/p&gt;
&lt;h2&gt;TCP/IP connections and reliability&lt;/h2&gt;
&lt;p&gt;In that benchmark post, I set &lt;code&gt;max_keepalive: 5_000_000&lt;/code&gt;, which told Phoenix
to keep the network connection open between requests. This allows the client to
send multiple requests on the same TCP/IP connection, avoiding the work and
latency from the TCP three-way handshake (SYN/SYN-ACK/ACK).&lt;/p&gt;
&lt;p&gt;This is realistic for interactive web use, where a single user may request
multiple pages or other assets. For an API, keep-alive may not be relevant.&lt;/p&gt;
&lt;p&gt;Packet loss can have a big effect on the worst case time. If the connection
loses a packet and it needs to be retransmitted, then that request will be
extra slow. For benchmarking, once the connection loses a packet, it can block
the rest of the pipeline, giving poor results; you may get better overall
throughput without keep-alive.&lt;/p&gt;
&lt;p&gt;The real world can be nasty, though, and tuning according to benchmarks in
a clean network can lead to poor application behavior.&lt;/p&gt;
&lt;p&gt;Mobile connections may have very high packet loss (&amp;gt; 50%) and/or very high
latency (&amp;gt; 500 ms).  Retransmits happen both at the cellular-network level and
the TCP level, causing pathological network behavior. Running a reliable
protocol on top of another reliable protocol results in conflicts with timing
of acknowledgements, retries, and windowing.&lt;/p&gt;
&lt;p&gt;With 50% packet loss, every time we send a packet, it may get lost. That goes
for DNS packets, TCP handshake packets, request packets, response packets, etc.
For an API, we are probably best &lt;a href="/blog/secure-web-applications-with-graphql-and-elixir/"&gt;minimizing the number of API
requests&lt;/a&gt;, putting more
data in each request. We rely on the automatic network retransmits to get the
data there. If the connection completely dies, it can take minutes for TCP to
give up. We may be better off with a connection per request again.&lt;/p&gt;
&lt;p&gt;We can also run out of TCP/IP sockets as we handle more simultaneous
requests.  As you get more traffic, it's important to &lt;a href="/blog/serving-your-phoenix-app-with-nginx/"&gt;configure each part of the
system&lt;/a&gt; to make sure you have
enough sockets, or it will fundamentally limit your application.  The defaults
are surprisingly small. This is particularly important when we are using web
sockets, as described in &lt;a href="http://phoenixframework.org/blog/the-road-to-2-million-websocket-connections"&gt;The Road to 2 Million Websocket Connections in
Phoenix&lt;/a&gt;.&lt;/p&gt;
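&lt;p&gt;On Linux, you can inspect the relevant limits before raising them (the tuning values in the comments are illustrative, not recommendations from this article):&lt;/p&gt;

```shell
# Inspect current socket-related limits (standard Linux; defaults vary by distro).
ulimit -n                                          # per-process open file descriptors
cat /proc/sys/net/core/somaxconn 2>/dev/null || true  # kernel listen backlog limit
# Typical tuning in /etc/sysctl.conf for a busy server (illustrative values):
# net.core.somaxconn = 4096
# fs.file-max = 500000
```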
&lt;h2&gt;Tuning the bottlenecks&lt;/h2&gt;
&lt;p&gt;The most important thing to look at when performance tuning is not the average,
it's the worst case. That indicates where the problems lie. Sometimes, as
described above, it's lost network packets. Other times, it's waiting on a shared
bottleneck, typically the database. Sometimes it's a garbage collection pause. The Erlang
VM manages memory on a fine-grained per-process basis, so it's particularly
good at avoiding long pauses.&lt;/p&gt;
&lt;p&gt;See this &lt;a href="/blog/presentation-on-elixir-performance/"&gt;performance tuning presentation&lt;/a&gt;
for more details.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="phoenix"/><category term="performance"/><category term="deployment"/></entry><entry><title>Managing app secrets with Ansible</title><link href="https://www.cogini.com/blog/managing-app-secrets-with-ansible/" rel="alternate"/><published>2018-06-16T00:00:00+08:00</published><updated>2018-06-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-06-16:/blog/managing-app-secrets-with-ansible/</id><summary type="html">&lt;p&gt;In web applications we usually have a few things that are sensitive, e.g. the
login to the production database or API keys used to access a third party API.
We need to be particularly careful about how we manage these secrets, as they
may allow attackers to access data …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In web applications we usually have a few things that are sensitive, e.g. the
login to the production database or API keys used to access a third party API.
We need to be particularly careful about how we manage these secrets, as they
may allow attackers to access data without going through the application
itself.&lt;/p&gt;
&lt;p&gt;There are trade-offs in managing secrets, depending on the size of the
organization.&lt;/p&gt;
&lt;p&gt;For a small team of developers who are also the admins, we implicitly
trust our devs. We may store the secrets on our dev machine and push them from
there to the app servers. It's better not to have secrets in the build
environment, though, particularly if it's a third party CI service.&lt;/p&gt;
&lt;p&gt;We need to keep the secrets separate from the build, loading them separately on
the target system. That might mean putting them in a separate file or reading
them at runtime from an external source like an S3 bucket or
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html"&gt;AWS Parameter Store&lt;/a&gt;.
Access is controlled by IAM instance roles.&lt;/p&gt;
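&lt;p&gt;For example, a startup script might read a secret from Parameter Store like this (the parameter name is hypothetical; the instance role needs &lt;code&gt;ssm:GetParameter&lt;/code&gt; permission):&lt;/p&gt;

```shell
# Sketch: fetch a decrypted secret from AWS Parameter Store at startup.
# Parameter names are examples only.
get_secret() {
  aws ssm get-parameter --name "$1" --with-decryption \
    --query 'Parameter.Value' --output text
}
# DB_PASS=$(get_secret /myapp/prod/db_pass)
```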
&lt;p&gt;For secure applications like health care or finance, we need to tightly control
access to production systems. We can restrict access to the ops team. Ideally
&lt;strong&gt;nobody&lt;/strong&gt; would log into production systems, and if they do, there is an audit log.&lt;/p&gt;
&lt;h2&gt;Ansible vault&lt;/h2&gt;
&lt;p&gt;The Ansible automation tool has a &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/vault.html"&gt;vault&lt;/a&gt; function
which we can use to store keys. It automates the process of encrypting variable
data so we can check it into source control, and only people with the password
can read it. It's great for simple deployments with small teams. It has very few
moving parts and dependencies, while being reasonably secure.&lt;/p&gt;
&lt;p&gt;The following describes how to use the vault.&lt;/p&gt;
&lt;h2&gt;Configuring the vault key&lt;/h2&gt;
&lt;p&gt;First, generate a vault key and put it in the file &lt;code&gt;vault.key&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl&lt;span class="w"&gt; &lt;/span&gt;rand&lt;span class="w"&gt; &lt;/span&gt;-hex&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;16&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can specify the password when you are running a playbook with the
&lt;code&gt;--vault-password-file vault.key&lt;/code&gt; option, or you can make the vault password
always available by setting it in &lt;code&gt;ansible.cfg&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;vault_password_file = vault.key
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Defining secrets&lt;/h2&gt;
&lt;p&gt;There are two ways to store secrets in Ansible variable files. Either we can
encrypt the file as a whole, or we can embed encrypted data inline.&lt;/p&gt;
&lt;p&gt;To create an encrypted variable file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-vault&lt;span class="w"&gt; &lt;/span&gt;create&lt;span class="w"&gt; &lt;/span&gt;--vault-id&lt;span class="o"&gt;=&lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;inventory/group_vars/web_servers/secrets.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Add variables normally. When you save the file, it will be encrypted.
Later, edit the file like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-vault&lt;span class="w"&gt; &lt;/span&gt;edit&lt;span class="w"&gt; &lt;/span&gt;--vault-id&lt;span class="o"&gt;=&lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;inventory/group_vars/web_servers/secrets.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;To encrypt a single variable inline:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl&lt;span class="w"&gt; &lt;/span&gt;rand&lt;span class="w"&gt; &lt;/span&gt;-hex&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;32&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;ansible-vault&lt;span class="w"&gt; &lt;/span&gt;encrypt_string&lt;span class="w"&gt; &lt;/span&gt;--vault-id&lt;span class="o"&gt;=&lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;--stdin-name&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;db_pass&amp;#39;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That generates encrypted data like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;db_pass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;!vault&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;|&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;$ANSIBLE_VAULT;1.1;AES256&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;64346139623638623838396261373265666363643264333664633965306465313864653033643530&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;3830366538366139353931323662373734353064303034660a326232343036646339623638346236&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;39623832656466356338373264623331363736636262393838323135663962633339303634353763&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;3935623562343131370a383439346166323832353232373933613363383435333037343231393830&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;35326662353662316339633732323335653332346465383030633333333638323735383666303264&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;35663335623061366536363134303061323861356331373334653363383961396330386136636661&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;63373230643163633465303933396336393531633035616335653234376666663935353838356135&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;36323866346139666462&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Copy that into a standard variable file.&lt;/p&gt;
&lt;h2&gt;Generating templates&lt;/h2&gt;
&lt;p&gt;Now, you can use Ansible's template function to create a config file template.
The template file includes the variables, and Ansible will automatically decrypt
vault variables and insert them into the template.&lt;/p&gt;
&lt;p&gt;For example, here is a template which configures an Elixir Phoenix app:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[&lt;/span&gt;&lt;span class="err"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;elixir_app_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;}}&lt;/span&gt;&lt;span class="k"&gt;.&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{ elixir_app_module }}Web.Endpoint&amp;quot;&lt;/span&gt;&lt;span class="k"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;secret_key_base&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{ secret_key_base }}&amp;quot;&lt;/span&gt;

&lt;span class="k"&gt;[&lt;/span&gt;&lt;span class="err"&gt;{{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;elixir_app_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;}}&lt;/span&gt;&lt;span class="k"&gt;.&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{ elixir_app_module }}.Repo&amp;quot;&lt;/span&gt;&lt;span class="k"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{ db_user }}&amp;quot;&lt;/span&gt;
&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{ db_pass }}&amp;quot;&lt;/span&gt;
&lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{ db_name }}&amp;quot;&lt;/span&gt;
&lt;span class="n"&gt;hostname&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{ db_host }}&amp;quot;&lt;/span&gt;
&lt;span class="n"&gt;ssl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;db_ssl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;pool_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;db_pool_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The Ansible task could generate it on the production server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Create config.toml&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;src=etc/app/config.toml.j2 dest=/etc/app/config.toml owner={{ app_user }} group={{ app_group }} mode=0644&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Generating config files to S3&lt;/h2&gt;
&lt;p&gt;When deploying in the cloud, you can generate the app config file to an S3
bucket. When the app starts up, it can then &lt;a href="https://github.com/cogini/mix_deploy#environment-setup-scripts"&gt;sync the config file from the S3
bucket&lt;/a&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;# Generate app config and upload to S3 bucket&lt;/span&gt;
&lt;span class="c1"&gt;#&lt;/span&gt;
&lt;span class="c1"&gt;# ansible-playbook -v -u $USER --extra-vars &amp;quot;env=$ENV&amp;quot; playbooks/$APP/config-app.yml -D&lt;/span&gt;

&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Generate config file from template and upload to S3&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;hosts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;localhost&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;gather_facts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;no&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;local&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;vars&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;app_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;foo&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;comp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;app&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;file_format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;toml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;input_template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;../../templates/{{ app_name }}/{{ comp }}/config.{{ file_format }}.j2&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;output_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;config.{{ file_format }}&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;vars_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;../../vars/{{ app_name }}/{{ env }}/common.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;../../vars/{{ app_name }}/{{ env }}/db-app.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;../../vars/{{ app_name }}/{{ env }}/app.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;../../vars/{{ app_name }}/{{ env }}/app-secrets.yml&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Create tempfile&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;tempfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;file&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;register&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;temp_file&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# - debug: var=temp_file.path&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Fill template to tempfile&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;src&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;input_template&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;dest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;temp_file.path&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;no_log&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;true&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Put config to S3&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;aws_s3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_bucket&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;object&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_bucket_prefix&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;output_file&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;src&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;temp_file.path&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;put&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Delete tempfile&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;absent&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;temp_file.path&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="ansible"/><category term="elixir"/><category term="deployment"/></entry><entry><title>Improving app security with the principle of least privilege</title><link href="https://www.cogini.com/blog/improving-app-security-with-the-principle-of-least-privilege/" rel="alternate"/><published>2018-06-11T00:00:00+08:00</published><updated>2018-06-11T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-06-11:/blog/improving-app-security-with-the-principle-of-least-privilege/</id><summary type="html">&lt;p&gt;The security principle of "least privilege" means that apps should only have
the permissions that they need to do their job, nothing more. If an attacker
compromises your app, then they can't do anything outside of what the app would
normally do. They may be able to break the application …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The security principle of "least privilege" means that apps should only have
the permissions that they need to do their job, nothing more. If an attacker
compromises your app, then they can't do anything outside of what the app would
normally do. They may be able to break the application, but they can't use it
as a stepping stone to attack other systems.&lt;/p&gt;
&lt;p&gt;For example, a normal web application might need to be able to respond to HTTP
requests, query and update a database, handle file uploads, and write log
messages.&lt;/p&gt;
&lt;h1&gt;Separate app user and deploy user&lt;/h1&gt;
&lt;p&gt;The app has certain things that it needs to do at runtime, e.g. read and write
files. Create a separate user for it to run under which only has those
permissions. Use a different user account to deploy and manage the app.&lt;/p&gt;
&lt;p&gt;The app account is not shared with anything else, so we can use user
permissions to control which files the app can read and limit the system
resources it can use. The app needs to be able to read its own source or binary
files, but it doesn't need to be able to &lt;em&gt;write&lt;/em&gt; them. Have the files owned by
the deploy user and use group permissions to give the app access to them.
One app should not be able to read and write another app's files. Tighten
up permissions on directories to disallow world read access.&lt;/p&gt;
&lt;p&gt;When deploying the app, we need to be able to restart it. Give those
permissions to the deploy user, not the app. Even better, instead of giving
sudo to the deploy user, use file triggers to restart. This lets you deploy
from e.g. a CI/CD system with limited trust.&lt;/p&gt;
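&lt;p&gt;As a sketch, the split looks like this. The account names and paths are
examples, and the exact &lt;code&gt;useradd&lt;/code&gt; flags vary a bit by distro:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Deploy user owns the code, app user only runs it
sudo useradd --system deploy
sudo useradd --system --groups deploy app

# Code is readable by the app via the group, not writable, not world readable
sudo chown -R deploy:deploy /srv/myapp
sudo chmod -R g+rX,o-rwx /srv/myapp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
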
&lt;h1&gt;Restrict permissions with systemd&lt;/h1&gt;
&lt;p&gt;By default, users are allowed to do quite a lot, e.g. open outbound network connections.
If you don't need these rights, restrict them on a per-user or per-app
basis. Systemd has &lt;a href="https://www.freedesktop.org/software/systemd/man/systemd.exec.html"&gt;many
features&lt;/a&gt;
to restrict what apps are allowed to do. It provides a relatively easy
declarative interface to traditional Unix features like
&lt;a href="http://man7.org/linux/man-pages/man2/chroot.2.html"&gt;chroot(2)&lt;/a&gt;. It also supports
modern Linux features which provide more ways to restrict access in a fine grained way.
You can use &lt;a href="https://wiki.centos.org/HowTos/SELinux"&gt;SELinux&lt;/a&gt; to restrict even more.&lt;/p&gt;
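&lt;p&gt;For example, we can harden a service with a drop-in unit file. These are
standard systemd directives; the service name is an example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo mkdir -p /etc/systemd/system/myapp.service.d
printf '%s\n' '[Service]' \
    'NoNewPrivileges=true' \
    'PrivateTmp=true' \
    'ProtectSystem=full' \
    'ProtectHome=true' | sudo tee /etc/systemd/system/myapp.service.d/hardening.conf
sudo systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
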
&lt;h1&gt;File uploads&lt;/h1&gt;
&lt;p&gt;In order to handle local file uploads, the app needs to be able to write to a
location on the local disk. We can create a directory like
&lt;code&gt;/var/lib/myapp/uploads&lt;/code&gt; and make it writable by the app user.  If all we need
to do is receive data, it's not too bad, but we usually need to show data to
other users. An attacker may try to upload a file to that directory, then
convince the system to run it as code.&lt;/p&gt;
&lt;p&gt;If our filters are not good, they might be able to upload a standalone PHP
script. Trickier still: if you allow users to upload an avatar image, an attacker
might embed some PHP code in the image file, then have the web server execute
it. Configure the web server to handle user content &lt;em&gt;only&lt;/em&gt; as data.&lt;/p&gt;
&lt;p&gt;The OS can help as well. Set the
&lt;a href="http://man7.org/linux/man-pages/man2/umask.2.html"&gt;umask&lt;/a&gt; to keep the app from
being able to create executable files. Make a separate file system for the
uploads, then mount it with the &lt;code&gt;noexec&lt;/code&gt; option, and the OS will stop
executable files from running.&lt;/p&gt;
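&lt;p&gt;A sketch, assuming the uploads live on their own volume (the device name
here is an example):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Mount the dedicated uploads file system with noexec
sudo mount -o noexec,nosuid,nodev /dev/sdb1 /var/lib/myapp/uploads

# In the app's systemd unit, UMask=0027 keeps new files from being
# group writable or world readable
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
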
&lt;p&gt;There are plenty of additional attacks where the data from one user can be used
to attack another user, e.g. by embedding JavaScript, but that's at a different
layer in the stack.&lt;/p&gt;
&lt;h2&gt;Using a Content Delivery Network&lt;/h2&gt;
&lt;p&gt;Even better is if we don't serve static files from the app server at all.
Upload the files to an S3 bucket, then use a CDN like CloudFront to deliver the
files. That is more secure, plus it's faster and cheaper than serving static
files from an app server.&lt;/p&gt;
&lt;h2&gt;Signed URLs&lt;/h2&gt;
&lt;p&gt;Instead of giving the app permission to write files to the S3 bucket, give the
user a signed URL which allows them to upload data directly to S3. The app
doesn't need to process the files, so it doesn't need permission to read or
write them at all. Again, faster and cheaper as well.&lt;/p&gt;
&lt;h1&gt;Database access&lt;/h1&gt;
&lt;p&gt;The app needs an account and password to be able to connect to the database.
Some apps might just need read only access, but most need to update tables as
well. We can still use the database permissions system to restrict what the
app db user can do.&lt;/p&gt;
&lt;p&gt;With PostgreSQL, remove &lt;a href="https://www.postgresql.org/docs/current/static/sql-alterrole.html"&gt;role
permissions&lt;/a&gt;
like &lt;code&gt;SUPERUSER&lt;/code&gt;, &lt;code&gt;CREATEDB&lt;/code&gt;, &lt;code&gt;CREATEROLE&lt;/code&gt;. Make the db schema owned by
a different database user from the app user, then restrict the app user from creating
or altering tables. When deploying the app, &lt;a href="/blog/database-migrations-in-the-cloud/"&gt;run database migrations&lt;/a&gt;
as the user that owns the schema, not the app user.&lt;/p&gt;
&lt;p&gt;In enterprise apps, it's common to share a database between multiple apps.
Use &lt;a href="https://www.postgresql.org/docs/current/static/sql-createview.html"&gt;database views&lt;/a&gt; to
create read-only views on data, restricting data from sensitive columns.&lt;/p&gt;
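&lt;p&gt;The role setup above can be sketched with &lt;code&gt;psql&lt;/code&gt;. The names are
examples: the schema is owned by a separate role, and the app connects as
&lt;code&gt;myapp_app&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo -u postgres psql -d myapp_db -c "ALTER ROLE myapp_app NOSUPERUSER NOCREATEDB NOCREATEROLE;"
sudo -u postgres psql -d myapp_db -c "REVOKE CREATE ON SCHEMA public FROM myapp_app;"
sudo -u postgres psql -d myapp_db -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myapp_app;"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
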
&lt;h1&gt;Writing logs&lt;/h1&gt;
&lt;p&gt;In order to write a request log, we normally give the app user write
access to a directory like &lt;code&gt;/var/log/myapp&lt;/code&gt;. That &lt;em&gt;also&lt;/em&gt; means that an attacker can
read potentially sensitive information from the logs, or overwrite them to cover
their tracks.&lt;/p&gt;
&lt;p&gt;Instead of writing our own log files, use &lt;code&gt;journald&lt;/code&gt; to manage logs. The
app writes its logs to standard out, and &lt;code&gt;systemd&lt;/code&gt; redirects them to the
journal. The app can also use the &lt;code&gt;journald&lt;/code&gt; logging API to write structured
log messages, including metadata. We then pull that data out in real time
and send it to a log aggregation system like Elasticsearch/Logstash/Kibana and
generate real-time alerts on application errors or attacks.&lt;/p&gt;
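&lt;p&gt;With logs in the journal, we can follow them and export structured records
(the unit name is an example):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Follow the app's log in real time
journalctl -u myapp -f

# Export structured records, e.g. to feed a log shipper
journalctl -u myapp --since today -o json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
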
&lt;h1&gt;Listening on a non-privileged port&lt;/h1&gt;
&lt;p&gt;Listening on TCP/IP ports below 1024 requires root permissions. In the early
days of the Internet, it was common to have programs start as root, bind to the
port, then (hopefully) drop privileges. The result was e.g. the "&lt;a href="https://en.wikipedia.org/wiki/Morris_worm"&gt;Morris
worm&lt;/a&gt;" which exploited a buffer
overflow in the "sendmail" mail server, then turned around to attack other
machines.&lt;/p&gt;
&lt;p&gt;We normally &lt;a href="/blog/serving-your-phoenix-app-with-nginx/"&gt;run our apps behind a proxy like
Nginx&lt;/a&gt;.  The proxy is still running
with elevated permissions, though. Better is if we &lt;a href="/blog/port-forwarding-with-iptables/"&gt;redirect traffic from port
80 to the app in the firewall using an iptables
rule&lt;/a&gt;. Nothing runs as root.&lt;/p&gt;
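&lt;p&gt;Assuming the app listens on port 4000, the redirect is a single rule:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 4000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
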
&lt;h1&gt;Egress filtering&lt;/h1&gt;
&lt;p&gt;We normally use firewalls and security groups to restrict inbound traffic, but
we can also use them to restrict outbound traffic. Set up firewall rules which
allow only the intended activity of the app. If someone hacks your system and
gets access to data, make it hard to get it out. Make it impossible to use your
app server to probe other internal systems looking for vulnerabilities or
attack other sites on the internet. Whitelist IPs to make sure that requests
are coming from the correct place.&lt;/p&gt;
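&lt;p&gt;A minimal egress policy might look like this. The database address is an
example; a real rule set needs to match your app's actual dependencies:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Allow replies to established connections, DNS, and the database
sudo iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
sudo iptables -A OUTPUT -p tcp --dport 5432 -d 10.0.0.5 -j ACCEPT

# Drop everything else outbound
sudo iptables -A OUTPUT -j DROP
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
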
&lt;h1&gt;Restrict access with IAM roles&lt;/h1&gt;
&lt;p&gt;When your app is running in a cloud environment like AWS, it needs to access
various resources like S3 buckets. Instead of putting AWS keys on your instance
where they can be stolen, assign an instance role which implicitly gives it
access to resources at runtime. Amazon is making it possible to
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html"&gt;access databases using IAM roles&lt;/a&gt;.
You can &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-ip.html"&gt;restrict access by source IP&lt;/a&gt;,
so even if an attacker gets your keys, they can't access resources from outside
the instance. The IAM instance role can also grant access to encryption keys in
KMS, unlocking encrypted S3 buckets or API keys for third-party services from &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html"&gt;Parameter
Store&lt;/a&gt;.&lt;/p&gt;
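&lt;p&gt;On the instance itself, we can confirm that role credentials are being
picked up at runtime:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Show the identity the AWS CLI is using (the instance role)
aws sts get-caller-identity

# The temporary credentials come from the instance metadata service
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
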
&lt;h1&gt;Putting it into practice&lt;/h1&gt;
&lt;p&gt;As you can see, there are lots of ways to restrict the deployment
environment to make life harder for attackers. The key is to think about
exactly what your application needs to do, then try to find ways
to make sure that it can only do that.&lt;/p&gt;
&lt;p&gt;There is much more to security than just the app itself. This is a
"defense in depth" approach. Security issues are inevitable, we need to limit
their impact. Attackers must break multiple layers in order to compromise
your app and make use of data and credentials that they obtain.&lt;/p&gt;
&lt;p&gt;Similarly, we need to monitor systems so that we know when they are attacked or
parts have been compromised. Add an audit trail to identify the source of
attacks, e.g. compromised user accounts or hostile internal users. Restrict
access to data so that, if there is a data breach, we can identify &lt;em&gt;exactly&lt;/em&gt; which
data was leaked.&lt;/p&gt;</content><category term="DevOps"/><category term="security"/><category term="systemd"/><category term="configuration"/></entry><entry><title>Running special versions of Erlang with ASDF and kerl</title><link href="https://www.cogini.com/blog/running-special-versions-of-erlang-with-asdf-and-kerl/" rel="alternate"/><published>2018-06-07T00:00:00+08:00</published><updated>2018-06-07T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-06-07:/blog/running-special-versions-of-erlang-with-asdf-and-kerl/</id><summary type="html">&lt;p&gt;Configuring the ASDF version manager and kerl build release candidate and other special versions of Erlang&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you want to try out &lt;a href="http://blog.erlang.org/My-OTP-21-Highlights/"&gt;the new features in Erlang
21&lt;/a&gt; before it's released, you
will have to build it yourself, as there is no package available. Same thing if
you want to run a patch release or configure Erlang with special options.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://github.com/asdf-vm/asdf"&gt;ASDF&lt;/a&gt; version manager lets you have
multiple versions of Erlang, Elixir and Node.js on your machine at one time,
choosing the version based on the &lt;code&gt;.tool-versions&lt;/code&gt; config file.&lt;/p&gt;
&lt;p&gt;Our &lt;a href="https://github.com/cogini/elixir-deploy-template#set-up-asdf"&gt;Elixir deploy
template&lt;/a&gt; has
instructions for setting up ASDF.&lt;/p&gt;
&lt;p&gt;List the available versions:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;list-all&lt;span class="w"&gt; &lt;/span&gt;erlang
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Install the release candidate:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;erlang&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;21&lt;/span&gt;.0-rc2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;As pointed out by Jared Smith (&lt;a href="https://twitter.com/sublimecoder"&gt;@sublimecoder&lt;/a&gt;),
you can also point to a git ref in the code base on GitHub:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;asdf&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;erlang&lt;span class="w"&gt; &lt;/span&gt;ref:master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

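&lt;p&gt;To build with special configure options, set &lt;code&gt;KERL_CONFIGURE_OPTIONS&lt;/code&gt;
before installing, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;export KERL_CONFIGURE_OPTIONS="--without-javac --without-wx"
asdf install erlang ref:master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
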
&lt;p&gt;You can use any branch, SHA, or tag on the repo, or switch to your local fork.
See &lt;a href="https://github.com/asdf-vm/asdf-erlang"&gt;the ASDF Erlang docs&lt;/a&gt; or
&lt;a href="https://github.com/kerl/kerl"&gt;the kerl docs&lt;/a&gt; for more info.&lt;/p&gt;</content><category term="Development"/><category term="erlang"/><category term="asdf"/><category term="kerl"/></entry><entry><title>Deploying your Phoenix app to Digital Ocean for beginners</title><link href="https://www.cogini.com/blog/deploying-your-phoenix-app-to-digital-ocean-for-beginners/" rel="alternate"/><published>2018-05-26T00:00:00+08:00</published><updated>2018-05-26T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-26:/blog/deploying-your-phoenix-app-to-digital-ocean-for-beginners/</id><summary type="html">&lt;p&gt;This is a gentle introduction to getting your Phoenix app up and running on a $5/month server at &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt;. It starts from zero, assuming minimal experience with servers.&lt;/p&gt;</summary><content type="html">&lt;p&gt;This is a gentle introduction to getting your Phoenix app up and running on a
$5/month server at &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt;. It starts
from zero, assuming minimal experience with servers. It assumes you are running macOS.
If you have any questions, open an issue &lt;a href="https://github.com/cogini/elixir-deploy-template"&gt;on GitHub&lt;/a&gt; or
ping me on the &lt;code&gt;#elixir-lang&lt;/code&gt; IRC channel on Freenode; I am &lt;code&gt;reachfh&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It is based on this &lt;a href="https://github.com/cogini/elixir-deploy-template"&gt;working template&lt;/a&gt;
and the principles in "&lt;a href="https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/"&gt;Best practices for deploying Elixir
apps&lt;/a&gt;".
The README for the &lt;a href="https://github.com/cogini/elixir-deploy-template"&gt;template&lt;/a&gt;
is very similar, but goes into more depth.&lt;/p&gt;
&lt;p&gt;It starts with a default Phoenix project with a PostgreSQL database. First get
the template running, then add the &lt;a href="https://github.com/cogini/elixir-deploy-template/#changes"&gt;changes&lt;/a&gt;
to your own project. This guide works with CentOS 7, Ubuntu 16.04, Ubuntu 18.04 and Debian 9.4.
If you are &lt;a href="/blog/choosing-a-linux-distribution/"&gt;not sure which distro to use&lt;/a&gt;, choose CentOS 7.
The approach here works fine for dedicated servers and cloud instances as well.&lt;/p&gt;
&lt;p&gt;It uses &lt;a href="https://www.ansible.com/resources/get-started"&gt;Ansible&lt;/a&gt;, which
is an easy-to-use standard tool for managing servers. Unlike edeliver, it
has reliable and well documented primitives to handle logging
into servers, uploading files and executing commands.&lt;/p&gt;
&lt;h1&gt;Overall approach&lt;/h1&gt;
&lt;ol&gt;
&lt;li&gt;Set up the web server&lt;/li&gt;
&lt;li&gt;Set up a build environment on the server&lt;/li&gt;
&lt;li&gt;Check out code on the server from git and build a release&lt;/li&gt;
&lt;li&gt;Deploy the release to the web server&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The actual work of checking out and deploying is handled by simple shell
scripts which you run on the build server or from your dev machine via
ssh, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# Check out latest code and build release on server&lt;/span&gt;
ssh&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;deploy@build-server&lt;span class="w"&gt; &lt;/span&gt;build/deploy-template/scripts/build-release.sh

&lt;span class="c1"&gt;# Deploy release&lt;/span&gt;
ssh&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;deploy@build-server&lt;span class="w"&gt; &lt;/span&gt;build/deploy-template/scripts/deploy-local.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h1&gt;Set up dev machine&lt;/h1&gt;
&lt;p&gt;Check out the project from git on your local dev machine, same as you normally
would:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git&lt;span class="w"&gt; &lt;/span&gt;clone&lt;span class="w"&gt; &lt;/span&gt;https://github.com/cogini/elixir-deploy-template
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h1&gt;Install build tools&lt;/h1&gt;
&lt;p&gt;Install Erlang, Elixir and Node.js according to the &lt;a href="https://elixir-lang.org/install.html"&gt;instructions on the Elixir
website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The code is tested with Erlang version 20.3, Elixir 1.6.6 and Node.js 8.2.1, so
installing those versions is best. I generally recommend using
&lt;a href="https://github.com/cogini/elixir-deploy-template#set-up-asdf"&gt;ASDF&lt;/a&gt;
to make your build environment more isolated and consistent, but it's not mandatory.&lt;/p&gt;
&lt;p&gt;Confirm that it works by building the app the normal way:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;deps.get
mix&lt;span class="w"&gt; &lt;/span&gt;deps.compile
mix&lt;span class="w"&gt; &lt;/span&gt;compile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should be able to run the app locally with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;ecto.create
&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;assets&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npm&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;node&lt;span class="w"&gt; &lt;/span&gt;node_modules/brunch/bin/brunch&lt;span class="w"&gt; &lt;/span&gt;build&lt;span class="o"&gt;)&lt;/span&gt;

iex&lt;span class="w"&gt; &lt;/span&gt;-S&lt;span class="w"&gt; &lt;/span&gt;mix&lt;span class="w"&gt; &lt;/span&gt;phx.server
open&lt;span class="w"&gt; &lt;/span&gt;http://localhost:4000/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Install Ansible&lt;/h2&gt;
&lt;p&gt;Install Ansible on your dev machine. On macOS, use pip, the Python package
manager:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;pip&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;ansible
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If pip isn't already installed, run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;easy_install&lt;span class="w"&gt; &lt;/span&gt;pip
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;a href="http://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html"&gt;the Ansible docs&lt;/a&gt;
for other options.&lt;/p&gt;
&lt;h2&gt;Generate an ssh key&lt;/h2&gt;
&lt;p&gt;We use ssh keys to control access to servers instead of passwords. This is more
secure and easier to automate.&lt;/p&gt;
&lt;p&gt;Generate an ssh key if you don't have one already:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh-keygen&lt;span class="w"&gt; &lt;/span&gt;-t&lt;span class="w"&gt; &lt;/span&gt;rsa&lt;span class="w"&gt; &lt;/span&gt;-b&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4096&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-C&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;your_email@example.com&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Set a pass phrase to protect access to your key (optional but recommended).
macOS and modern Linux desktops will remember your pass phrase in the keyring
when you log in so you don't have to enter it every time.&lt;/p&gt;
&lt;p&gt;Add the &lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt; public key file to your GitHub account.
See &lt;a href="https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/"&gt;the GitHub docs&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Set up a server&lt;/h1&gt;
&lt;p&gt;Go to &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt; (affiliate link) and
create a Droplet (virtual server).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Choose an image&lt;/strong&gt;: If you are &lt;a href="/blog/choosing-a-linux-distribution/"&gt;not sure which distro to
  use&lt;/a&gt;, choose CentOS 7&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a size&lt;/strong&gt;: The smallest, $5/month Droplet is fine&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a datacenter region&lt;/strong&gt;: Select a data center near you&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add your SSH keys&lt;/strong&gt;: Select the "New SSH Key" button, and paste the
  contents of your &lt;code&gt;~/.ssh/id_rsa.pub&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose a hostname&lt;/strong&gt;: The default name is fine, but a bit awkward to type. Use
  "web-server" or whatever you like.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The defaults for everything else are fine. Click the "Create" button.&lt;/p&gt;
&lt;p&gt;Add the host to the &lt;code&gt;~/.ssh/config&lt;/code&gt; file on your dev machine:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Host web-server
    HostName 123.45.67.89
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The file permissions on &lt;code&gt;~/.ssh/config&lt;/code&gt; need to be secure or ssh will be unhappy:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;chmod&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;600&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~/.ssh/config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Test it by connecting to the server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;root@web-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If it doesn't work, run ssh with &lt;code&gt;-v&lt;/code&gt; flags to see what the problem is.
You can add more verbosity, e.g. &lt;code&gt;-vvvv&lt;/code&gt; if you need more detail.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;-vv&lt;span class="w"&gt; &lt;/span&gt;root@web-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;File permissions are the most common cause of problems with ssh. Another common
problem is forgetting to add your ssh key when creating the Droplet. Destroy
the Droplet and create it again.&lt;/p&gt;
&lt;h2&gt;Configure Ansible&lt;/h2&gt;
&lt;p&gt;Add the hosts to the groups in the Ansible inventory &lt;code&gt;ansible/inventory/hosts&lt;/code&gt;
file in the project:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[web-servers]&lt;/span&gt;
&lt;span class="na"&gt;web-server&lt;/span&gt;

&lt;span class="k"&gt;[build-servers]&lt;/span&gt;
&lt;span class="na"&gt;web-server&lt;/span&gt;

&lt;span class="k"&gt;[db-servers]&lt;/span&gt;
&lt;span class="na"&gt;web-server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;[web-servers]&lt;/code&gt; is a group of web servers. &lt;code&gt;web-server&lt;/code&gt; is the hostname from
the &lt;code&gt;Host&lt;/code&gt; line in your &lt;code&gt;.ssh/config&lt;/code&gt; file. &lt;code&gt;[build-servers]&lt;/code&gt; is the group of
build servers. It can be the same as your web server. &lt;code&gt;[db-servers]&lt;/code&gt; is the
group of database servers, which can be the same.&lt;/p&gt;
&lt;p&gt;If you are using Ubuntu or Debian, add the host to the &lt;code&gt;[py3-hosts]&lt;/code&gt; group, and
it will use the Python 3 interpreter that comes by default on the server.&lt;/p&gt;
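&lt;p&gt;For example, with the host above:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;[py3-hosts]
web-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
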
&lt;p&gt;The repo has multiple hosts in the groups for testing different OS versions;
comment them out.&lt;/p&gt;
&lt;h3&gt;Set Ansible variables&lt;/h3&gt;
&lt;p&gt;The configuration variables defined in &lt;code&gt;inventory/group_vars/all&lt;/code&gt; apply to all hosts in
your project. They are overridden by vars in more specific groups like
&lt;code&gt;inventory/group_vars/web-servers&lt;/code&gt; or for individual hosts, e.g.
&lt;code&gt;inventory/host_vars/web-server&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Ansible uses ssh to connect to the server. These playbooks use ssh keys to
control logins to server accounts, not passwords. The &lt;code&gt;users&lt;/code&gt; Ansible role
manages accounts.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;inventory/group_vars/all/users.yml&lt;/code&gt; file defines a global list of users and
system admins. It has a live user (me!); &lt;strong&gt;change it to match your details&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;users_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Jake&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Morrison&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;github&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;reachfh&lt;/span&gt;

&lt;span class="nt"&gt;users_global_admin_users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;jake&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;inventory/group_vars/all/elixir-release.yml&lt;/code&gt; file specifies the
app settings:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# External name of the app, used to name directories and the systemd process&lt;/span&gt;
&lt;span class="nt"&gt;elixir_release_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;deploy-template&lt;/span&gt;

&lt;span class="c1"&gt;# Internal &amp;quot;Elixir&amp;quot; name of the app, used to by release to name things&lt;/span&gt;
&lt;span class="nt"&gt;elixir_release_name_code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;deploy_template&lt;/span&gt;

&lt;span class="c1"&gt;# Name of your organization or overall project, used to make a unique dir prefix&lt;/span&gt;
&lt;span class="nt"&gt;elixir_release_org&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;myorg&lt;/span&gt;

&lt;span class="c1"&gt;# OS user the app runs under&lt;/span&gt;
&lt;span class="nt"&gt;elixir_release_app_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;foo&lt;/span&gt;

&lt;span class="c1"&gt;# OS user for building and deploying the code&lt;/span&gt;
&lt;span class="nt"&gt;elixir_release_deploy_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;deploy&lt;/span&gt;

&lt;span class="c1"&gt;# Port that Phoenix listens on&lt;/span&gt;
&lt;span class="nt"&gt;elixir_release_http_listen_port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;4001&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;inventory/group_vars/build-servers/vars.yml&lt;/code&gt; file specifies the build settings.&lt;/p&gt;
&lt;p&gt;It specifies the project's git repo, which the Ansible playbook will check out
on the build server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;# App git repo&lt;/span&gt;
&lt;span class="nt"&gt;app_repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;https://github.com/cogini/elixir-deploy-template&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Set up web server&lt;/h2&gt;
&lt;p&gt;Run the following Ansible commands from the &lt;code&gt;ansible&lt;/code&gt; dir in the project.&lt;/p&gt;
&lt;p&gt;Do initial server setup:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/setup-web.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this command, &lt;code&gt;web-servers&lt;/code&gt; is the group of servers, but you could also
specify a specific host like &lt;code&gt;web-server&lt;/code&gt;. Ansible allows you to work on groups
of servers simultaneously. Configuration tasks are generally written to be
idempotent, so we can run the playbook against all our servers and it will make
whatever changes are needed to get them up to date.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-u&lt;/code&gt; flag specifies which user account to use on the server. We have to use
root to do the initial bootstrap, but after that you should generally use your
own user account, assuming it can sudo. The &lt;code&gt;-v&lt;/code&gt; flag controls verbosity; add
more v's to get more debug info. The &lt;code&gt;-D&lt;/code&gt; flag shows diffs of the changes
Ansible makes on the server. If you add &lt;code&gt;--check&lt;/code&gt; to the Ansible command, it
will show you the changes it is planning to make without actually running them.
These scripts are safe to run in check mode, but may give an error during the play
if required OS packages are not installed.&lt;/p&gt;
&lt;p&gt;Set up the app (create dirs, etc.):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/deploy-app.yml&lt;span class="w"&gt; &lt;/span&gt;--skip-tags&lt;span class="w"&gt; &lt;/span&gt;deploy&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Configure runtime secrets, setting the &lt;code&gt;$HOME/.erlang.cookie&lt;/code&gt; file and
generating a &lt;a href="https://github.com/bitwalker/conform"&gt;Conform&lt;/a&gt; config file at
&lt;code&gt;/etc/deploy-template/deploy_template.conf&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/config-web.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For ease of getting started, this generates secrets on your local machine and
stores them in &lt;code&gt;/tmp&lt;/code&gt;. See below for a discussion of managing secrets.&lt;/p&gt;
&lt;p&gt;At this point, the web server is set up, but we need to build and deploy
the app code to it.&lt;/p&gt;
&lt;h2&gt;Set up build server&lt;/h2&gt;
&lt;p&gt;We assume this is the same as the web server, but it can be different.&lt;/p&gt;
&lt;p&gt;Set up the server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;build-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/setup-build.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This sets up the build environment, e.g. installing ASDF.&lt;/p&gt;
&lt;p&gt;Install PostgreSQL, assuming we are running the web app on the same server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;build-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/setup-db.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Configure &lt;code&gt;config/prod.secret.exs&lt;/code&gt; on the build server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;build-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/config-build.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Again, see below for discussion about managing secrets.&lt;/p&gt;
&lt;h2&gt;Build the app&lt;/h2&gt;
&lt;p&gt;Log into the &lt;code&gt;deploy&lt;/code&gt; user on the build machine:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssh&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;deploy@build-server
&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;~/build/deploy-template
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;-A&lt;/code&gt; flag on the ssh command gives the session on the server access to your
local ssh keys. If your local user can access a GitHub repo, then the server
can do it, without having to put keys on the server. Similarly, if your ssh key
is on the prod server, then you can push code from the build server using
Ansible without the web server needing to trust the build server.&lt;/p&gt;
&lt;p&gt;Build the production release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;scripts/build-release.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That script runs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Pulling latest code from git&amp;quot;&lt;/span&gt;
git&lt;span class="w"&gt; &lt;/span&gt;pull

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Updating versions of Erlang/Elixir/Node.js if necessary&amp;quot;&lt;/span&gt;
asdf&lt;span class="w"&gt; &lt;/span&gt;install
asdf&lt;span class="w"&gt; &lt;/span&gt;install

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Updating Elixir libs&amp;quot;&lt;/span&gt;
mix&lt;span class="w"&gt; &lt;/span&gt;local.hex&lt;span class="w"&gt; &lt;/span&gt;--if-missing&lt;span class="w"&gt; &lt;/span&gt;--force
mix&lt;span class="w"&gt; &lt;/span&gt;local.rebar&lt;span class="w"&gt; &lt;/span&gt;--if-missing&lt;span class="w"&gt; &lt;/span&gt;--force
mix&lt;span class="w"&gt; &lt;/span&gt;deps.get&lt;span class="w"&gt; &lt;/span&gt;--only&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$MIX_ENV&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Compiling&amp;quot;&lt;/span&gt;
mix&lt;span class="w"&gt; &lt;/span&gt;compile

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Updating node libraries&amp;quot;&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;assets&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;npm&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;node&lt;span class="w"&gt; &lt;/span&gt;node_modules/brunch/bin/brunch&lt;span class="w"&gt; &lt;/span&gt;build&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Building release&amp;quot;&lt;/span&gt;
mix&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;phx.digest,&lt;span class="w"&gt; &lt;/span&gt;release
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;asdf install&lt;/code&gt; builds Erlang from source, so the first time it runs it can take
a long time. If it fails due to a lost connection, delete
&lt;code&gt;/home/deploy/.asdf/installs/erlang/20.3&lt;/code&gt; and try again.
You may want to run it under &lt;code&gt;tmux&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Deploy the release locally&lt;/h2&gt;
&lt;p&gt;If you are building on the web server, then you can use the custom mix
tasks in &lt;code&gt;lib/mix/tasks/deploy.ex&lt;/code&gt; to deploy locally.&lt;/p&gt;
&lt;p&gt;In &lt;code&gt;mix.exs&lt;/code&gt;, set &lt;code&gt;deploy_dir&lt;/code&gt; to match the Ansible vars, i.e.
&lt;code&gt;deploy_dir: /opt/{{ elixir_release_org }}/{{ elixir_release_name }}&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="ss"&gt;deploy_dir&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/opt/myorg/deploy-template/&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Deploy the release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;scripts/deploy-local.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That script runs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nv"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod&lt;span class="w"&gt; &lt;/span&gt;mix&lt;span class="w"&gt; &lt;/span&gt;deploy.local
sudo&lt;span class="w"&gt; &lt;/span&gt;/bin/systemctl&lt;span class="w"&gt; &lt;/span&gt;restart&lt;span class="w"&gt; &lt;/span&gt;deploy-template
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The build is done under the &lt;code&gt;deploy&lt;/code&gt; user, which owns the files under
&lt;code&gt;/opt/myorg/deploy-template&lt;/code&gt; and has a special &lt;code&gt;/etc/sudoers.d&lt;/code&gt; config which
allows it to run the &lt;code&gt;/bin/systemctl restart deploy-template&lt;/code&gt; command.&lt;/p&gt;
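&lt;p&gt;As a sketch, that sudoers entry would look something like the following (the
exact file name and contents used by the &lt;code&gt;users&lt;/code&gt; role may differ):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# /etc/sudoers.d/deploy
deploy ALL=(ALL) NOPASSWD: /bin/systemctl restart deploy-template
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;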
&lt;h3&gt;Verify it works&lt;/h3&gt;
&lt;p&gt;Make a request to the app supervised by systemd:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;curl&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;http://localhost:4001/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Have a look at the logs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;systemctl&lt;span class="w"&gt; &lt;/span&gt;status&lt;span class="w"&gt; &lt;/span&gt;deploy-template
journalctl&lt;span class="w"&gt; &lt;/span&gt;-r&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;deploy-template
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Make a request to the machine over the network on port 80 through the magic of
&lt;a href="https://www.cogini.com/blog/port-forwarding-with-iptables/"&gt;iptables port forwarding&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can get a console on the running app by logging in as the &lt;code&gt;foo&lt;/code&gt; user the
app runs under and executing:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/opt/myorg/deploy-template/scripts/remote_console.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can also &lt;a href="https://github.com/cogini/elixir-deploy-template#deploy-to-a-remote-machine-using-ansible"&gt;deploy to a remote machine using
Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Managing secrets with Ansible&lt;/h3&gt;
&lt;p&gt;Ansible has a &lt;a href="http://docs.ansible.com/ansible/2.5/user_guide/vault.html"&gt;vault&lt;/a&gt; function
which you can use to store keys. It automates the process of encrypting
variable data so you can check it into source control, so only people with the
password can read it.&lt;/p&gt;
&lt;p&gt;There are trade-offs in managing secrets.&lt;/p&gt;
&lt;p&gt;For a small team of devs who are also the admins, you trust your
developers and your own dev machines with the secrets. Even so, it's better not
to keep secrets in the build environment; you can push the prod secrets
directly from your dev machine to the web servers. If you are using a
third-party CI server, that goes double: you don't want to give the CI service
access to your production keys.&lt;/p&gt;
&lt;p&gt;For secure applications like health care or finance, we need to tightly control
access to production systems. Ideally nobody would log into production systems,
and if they do, it should be logged. You can restrict vault password access to
your ops team, or use different keys for different environments.&lt;/p&gt;
&lt;p&gt;You can also set up a build/deploy server in the cloud which has access to the
keys and configure the production instances from it. When we run in an AWS auto
scaling group, we build an AMI with &lt;a href="https://www.packer.io/"&gt;Packer&lt;/a&gt; and
Ansible, putting the keys on it the same way. Even better, however, is to not
store keys on the server at all. Pull them when the app starts up, reading from
an S3 bucket or Amazon's KMS, with access controlled by IAM instance roles.&lt;/p&gt;
&lt;p&gt;The one thing that really needs to be there at startup is the Erlang cookie;
everything else we can pull at runtime. If we are not using the Erlang
distribution protocol, then we don't need to share the cookie; it just needs to
be secure.&lt;/p&gt;
&lt;p&gt;The following describes how you can use the vault.&lt;/p&gt;
&lt;p&gt;Generate a vault password and put it in the file &lt;code&gt;ansible/vault.key&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl&lt;span class="w"&gt; &lt;/span&gt;rand&lt;span class="w"&gt; &lt;/span&gt;-hex&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;16&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can specify the password when you are running a playbook with the
&lt;code&gt;--vault-password-file vault.key&lt;/code&gt; option, or you can make the vault password always
available by setting it in &lt;code&gt;ansible/ansible.cfg&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;vault_password_file = vault.key
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;ansible/inventory/group_vars/web-servers/secrets.yml&lt;/code&gt; file specifies deploy secrets.&lt;/p&gt;
&lt;p&gt;Generate a cookie for deployment and copy it into the &lt;code&gt;secrets.yml&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl&lt;span class="w"&gt; &lt;/span&gt;rand&lt;span class="w"&gt; &lt;/span&gt;-hex&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;32&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;ansible-vault&lt;span class="w"&gt; &lt;/span&gt;encrypt_string&lt;span class="w"&gt; &lt;/span&gt;--vault-id&lt;span class="w"&gt; &lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;--stdin-name&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;erlang_cookie&amp;#39;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That generates encrypted data like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;erlang_cookie&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;!vault&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;|&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;$ANSIBLE_VAULT;1.1;AES256&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;64346139623638623838396261373265666363643264333664633965306465313864653033643530&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;3830366538366139353931323662373734353064303034660a326232343036646339623638346236&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;39623832656466356338373264623331363736636262393838323135663962633339303634353763&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;3935623562343131370a383439346166323832353232373933613363383435333037343231393830&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;35326662353662316339633732323335653332346465383030633333333638323735383666303264&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;35663335623061366536363134303061323861356331373334653363383961396330386136636661&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;63373230643163633465303933396336393531633035616335653234376666663935353838356135&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="no"&gt;36323866346139666462&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Generate &lt;code&gt;secret_key_base&lt;/code&gt; for the server the same way:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl&lt;span class="w"&gt; &lt;/span&gt;rand&lt;span class="w"&gt; &lt;/span&gt;-base64&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;48&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;ansible-vault&lt;span class="w"&gt; &lt;/span&gt;encrypt_string&lt;span class="w"&gt; &lt;/span&gt;--vault-id&lt;span class="w"&gt; &lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;--stdin-name&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;secret_key_base&amp;#39;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Generate &lt;code&gt;db_pass&lt;/code&gt; for the db user:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl&lt;span class="w"&gt; &lt;/span&gt;rand&lt;span class="w"&gt; &lt;/span&gt;-hex&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;16&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;ansible-vault&lt;span class="w"&gt; &lt;/span&gt;encrypt_string&lt;span class="w"&gt; &lt;/span&gt;--vault-id&lt;span class="w"&gt; &lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;--stdin-name&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;db_pass&amp;#39;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This playbook configures the production server, setting the
&lt;code&gt;$HOME/.erlang.cookie&lt;/code&gt; file on the web server and generating a Conform config file at
&lt;code&gt;/etc/deploy-template/deploy_template.conf&lt;/code&gt; with the other vars:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;--vault-password-file&lt;span class="w"&gt; &lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;web-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/config-web.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This playbook configures &lt;code&gt;config/prod.secret.exs&lt;/code&gt; on the build server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;--vault-password-file&lt;span class="w"&gt; &lt;/span&gt;vault.key&lt;span class="w"&gt; &lt;/span&gt;-u&lt;span class="w"&gt; &lt;/span&gt;root&lt;span class="w"&gt; &lt;/span&gt;-v&lt;span class="w"&gt; &lt;/span&gt;-l&lt;span class="w"&gt; &lt;/span&gt;build-servers&lt;span class="w"&gt; &lt;/span&gt;playbooks/config-build.yml&lt;span class="w"&gt; &lt;/span&gt;-D
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Database&lt;/h2&gt;
&lt;p&gt;Most apps use a database. The Ansible playbook &lt;code&gt;playbooks/setup-db.yml&lt;/code&gt; creates
the database for you.&lt;/p&gt;
&lt;p&gt;Whenever you change the db schema, you need to run migrations on the server.&lt;/p&gt;
&lt;p&gt;After building the release, but before deploying the code, update the db to
match the code:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;scripts/db-migrate.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That script runs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nv"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod&lt;span class="w"&gt; &lt;/span&gt;mix&lt;span class="w"&gt; &lt;/span&gt;ecto.migrate
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Surprisingly, the same process also works when we are deploying in an AWS cloud
environment. Create a build instance in the VPC private subnet which has
permissions to talk to the RDS database. Run the Ecto commands to migrate the
db, build the release, then do a Blue/Green deployment to the ASG using AWS
CodeDeploy.&lt;/p&gt;
&lt;h1&gt;Changes&lt;/h1&gt;
&lt;p&gt;Following are the steps used to set up this repo. You can do the same to add
it to your own project.&lt;/p&gt;
&lt;p&gt;It all began with a new Phoenix project:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;phx.new&lt;span class="w"&gt; &lt;/span&gt;deploy_template
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Set up release&lt;/h2&gt;
&lt;p&gt;Generate initial files in the &lt;code&gt;rel&lt;/code&gt; dir:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;release.init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Modify &lt;code&gt;rel/config.exs&lt;/code&gt; and &lt;code&gt;vm.args.eex&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Set up ASDF&lt;/h2&gt;
&lt;p&gt;Add the &lt;code&gt;.tool-versions&lt;/code&gt; file to specify versions of Elixir and Erlang.&lt;/p&gt;
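&lt;p&gt;For example (the Erlang version matches the one mentioned above in the build
step; the Elixir and Node.js versions are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;erlang 20.3
elixir 1.6.4
nodejs 8.11.1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;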
&lt;h2&gt;Configure for running in a release&lt;/h2&gt;
&lt;p&gt;Edit &lt;code&gt;config/prod.exs&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Uncomment this so Phoenix will run in a release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:phoenix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:serve_endpoints&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Add Ansible&lt;/h2&gt;
&lt;p&gt;Add the Ansible tasks to set up the servers and deploy code, in the &lt;code&gt;ansible&lt;/code&gt;
directory. Configure the vars in the inventory.&lt;/p&gt;
&lt;p&gt;This repository contains local copies of roles from Ansible Galaxy in
&lt;code&gt;roles.galaxy&lt;/code&gt;. To install them, run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-galaxy&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;--roles-path&lt;span class="w"&gt; &lt;/span&gt;roles.galaxy&lt;span class="w"&gt; &lt;/span&gt;-r&lt;span class="w"&gt; &lt;/span&gt;install_roles.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Add mix tasks for local deploy&lt;/h2&gt;
&lt;p&gt;Add &lt;code&gt;lib/mix/tasks/deploy.ex&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Add Conform for configuration&lt;/h2&gt;
&lt;p&gt;Add &lt;a href="https://github.com/bitwalker/conform"&gt;Conform&lt;/a&gt; to &lt;code&gt;deps&lt;/code&gt; in &lt;code&gt;mix.exs&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:conform&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;~&amp;gt; 2.2&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Generate a schema in the &lt;code&gt;config/deploy_template.schema.exs&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nc"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;prod&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mix&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;conform&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;new&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Generate a sample &lt;code&gt;deploy_template.prod.conf&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nc"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;prod&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mix&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;conform&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;configure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Integrate with release by adding &lt;code&gt;plugin Conform.ReleasePlugin&lt;/code&gt;
to &lt;code&gt;rel/config.exs&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;release&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:deploy_template&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;current_version&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:deploy_template&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;applications&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;:runtime_tools&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;plugin&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Conform.ReleasePlugin&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/></entry><entry><title>Benchmarking Phoenix on Digital Ocean</title><link href="https://www.cogini.com/blog/benchmarking-phoenix-on-digital-ocean/" rel="alternate"/><published>2018-05-18T00:00:00+08:00</published><updated>2018-05-18T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-18:/blog/benchmarking-phoenix-on-digital-ocean/</id><summary type="html">&lt;p&gt;Just for fun, I decided to benchmark the performance of the &lt;a href="https://github.com/cogini/elixir-deploy-template"&gt;elixir deploy template&lt;/a&gt; running on a $5/month &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt; Droplet.&lt;/p&gt;</summary><content type="html">&lt;p&gt;Just for fun, I decided to benchmark the performance of the &lt;a href="https://github.com/cogini/elixir-deploy-template"&gt;elixir deploy
template&lt;/a&gt; running on a
$5/month &lt;a href="https://m.do.co/c/150575a88316"&gt;Digital Ocean&lt;/a&gt; Droplet.&lt;/p&gt;
&lt;p&gt;Following &lt;a href="http://www.theerlangelist.com/article/phoenix_latency"&gt;Saša Jurić's post&lt;/a&gt;,
I &lt;a href="https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux"&gt;set up wrk&lt;/a&gt; on my Mac
and on some Digital Ocean instances in the same data center.&lt;/p&gt;
&lt;p&gt;I made a simple request function in &lt;code&gt;wrk.lua&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kr"&gt;function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="n"&gt;wrk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;quot;GET&amp;quot;&lt;/span&gt;
  &lt;span class="kr"&gt;return&lt;/span&gt; &lt;span class="n"&gt;wrk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;&amp;quot;/&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kr"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This just reads the default Phoenix home page. No application logic runs
and, more importantly, no database calls are made.&lt;/p&gt;
&lt;p&gt;With zero tuning of Phoenix, from my Mac in Taiwan going to Digital Ocean in
Singapore, I got the following results:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wrk&lt;span class="w"&gt; &lt;/span&gt;-t12&lt;span class="w"&gt; &lt;/span&gt;-c12&lt;span class="w"&gt; &lt;/span&gt;-d60s&lt;span class="w"&gt; &lt;/span&gt;--latency&lt;span class="w"&gt; &lt;/span&gt;-s&lt;span class="w"&gt; &lt;/span&gt;wrk.lua&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;http://159.89.197.173&amp;quot;&lt;/span&gt;
Running&lt;span class="w"&gt; &lt;/span&gt;1m&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;@&lt;span class="w"&gt; &lt;/span&gt;http://159.89.197.173
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;threads&lt;span class="w"&gt; &lt;/span&gt;and&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;connections
&lt;span class="w"&gt;  &lt;/span&gt;Thread&lt;span class="w"&gt; &lt;/span&gt;Stats&lt;span class="w"&gt;   &lt;/span&gt;Avg&lt;span class="w"&gt;      &lt;/span&gt;Stdev&lt;span class="w"&gt;     &lt;/span&gt;Max&lt;span class="w"&gt;   &lt;/span&gt;+/-&lt;span class="w"&gt; &lt;/span&gt;Stdev
&lt;span class="w"&gt;    &lt;/span&gt;Latency&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;73&lt;/span&gt;.06ms&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;78&lt;/span&gt;.84ms&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.18s&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;96&lt;/span&gt;.43%
&lt;span class="w"&gt;    &lt;/span&gt;Req/Sec&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;15&lt;/span&gt;.63&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;.09&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;20&lt;/span&gt;.00&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;63&lt;/span&gt;.53%
&lt;span class="w"&gt;  &lt;/span&gt;Latency&lt;span class="w"&gt; &lt;/span&gt;Distribution
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;50&lt;/span&gt;%&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;60&lt;/span&gt;.49ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;75&lt;/span&gt;%&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;62&lt;/span&gt;.98ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;90&lt;/span&gt;%&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;72&lt;/span&gt;.32ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;99&lt;/span&gt;%&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;454&lt;/span&gt;.03ms
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;11099&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;requests&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.00m,&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;24&lt;/span&gt;.06MB&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt;
Requests/sec:&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;184&lt;/span&gt;.77
Transfer/sec:&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;410&lt;/span&gt;.20KB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The response time is all driven from the network latency:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ping&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173
PING&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173&lt;span class="o"&gt;)&lt;/span&gt;:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;56&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;data&lt;span class="w"&gt; &lt;/span&gt;bytes
&lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;49&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;56&lt;/span&gt;.037&lt;span class="w"&gt; &lt;/span&gt;ms
&lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;49&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;55&lt;/span&gt;.475&lt;span class="w"&gt; &lt;/span&gt;ms
&lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;49&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;55&lt;/span&gt;.455&lt;span class="w"&gt; &lt;/span&gt;ms
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The actual processing time of Phoenix is in the microsecond range:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;May&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;elixir&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;29275&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;05.777&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="n"&gt;d6bcb89q0jv834pu7d39fmaaq828opg&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="n"&gt;May&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;elixir&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;29275&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;05.777&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="n"&gt;d6bcb89q0jv834pu7d39fmaaq828opg&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Sent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;142&lt;/span&gt;&lt;span class="n"&gt;µs&lt;/span&gt;
&lt;span class="n"&gt;May&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;elixir&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;29275&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;05.781&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mbrp97u5btbvike3057ho7h4ao9j4jfd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="n"&gt;May&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;elixir&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;deploy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;29275&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;07&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;05.781&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mbrp97u5btbvike3057ho7h4ao9j4jfd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Sent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;251&lt;/span&gt;&lt;span class="n"&gt;µs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The machine itself is not working hard at all. The worst case latency is driven
by occasional network glitches, e.g. lost packets.&lt;/p&gt;
&lt;p&gt;I did a bit of tuning: raising the log level so that each request is not
written to the log twice, and increasing the &lt;code&gt;max_keepalive&lt;/code&gt; so that
multiple requests run on the same connection. Reusing connections is realistic
if your users do multiple things on your site. It mainly affects
the max latency stats as compared to the average latency.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git&lt;span class="w"&gt; &lt;/span&gt;diff
diff&lt;span class="w"&gt; &lt;/span&gt;--git&lt;span class="w"&gt; &lt;/span&gt;a/config/prod.exs&lt;span class="w"&gt; &lt;/span&gt;b/config/prod.exs
index&lt;span class="w"&gt; &lt;/span&gt;1cf2f5a..7acf2d7&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;100644&lt;/span&gt;
---&lt;span class="w"&gt; &lt;/span&gt;a/config/prod.exs
+++&lt;span class="w"&gt; &lt;/span&gt;b/config/prod.exs
@@&lt;span class="w"&gt; &lt;/span&gt;-14,12&lt;span class="w"&gt; &lt;/span&gt;+14,16&lt;span class="w"&gt; &lt;/span&gt;@@&lt;span class="w"&gt; &lt;/span&gt;use&lt;span class="w"&gt; &lt;/span&gt;Mix.Config
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c1"&gt;# manifest is generated by the mix phx.digest task&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c1"&gt;# which you typically run after static files are built.&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;config&lt;span class="w"&gt; &lt;/span&gt;:deploy_template,&lt;span class="w"&gt; &lt;/span&gt;DeployTemplateWeb.Endpoint,
+&lt;span class="w"&gt;  &lt;/span&gt;http:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;port:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4001&lt;/span&gt;,
+&lt;span class="w"&gt;    &lt;/span&gt;protocol_options:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;max_keepalive:&lt;span class="w"&gt; &lt;/span&gt;5_000_000&lt;span class="o"&gt;]&lt;/span&gt;
+&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
&lt;span class="w"&gt;   &lt;/span&gt;load_from_system_env:&lt;span class="w"&gt; &lt;/span&gt;true,
&lt;span class="w"&gt;   &lt;/span&gt;url:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;host:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;example.com&amp;quot;&lt;/span&gt;,&lt;span class="w"&gt; &lt;/span&gt;port:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
&lt;span class="w"&gt;   &lt;/span&gt;cache_static_manifest:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;priv/static/cache_manifest.json&amp;quot;&lt;/span&gt;

&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c1"&gt;# Do not print debug messages in production&lt;/span&gt;
-config&lt;span class="w"&gt; &lt;/span&gt;:logger,&lt;span class="w"&gt; &lt;/span&gt;level:&lt;span class="w"&gt; &lt;/span&gt;:info
+#config&lt;span class="w"&gt; &lt;/span&gt;:logger,&lt;span class="w"&gt; &lt;/span&gt;level:&lt;span class="w"&gt; &lt;/span&gt;:info
+config&lt;span class="w"&gt; &lt;/span&gt;:logger,&lt;span class="w"&gt; &lt;/span&gt;level:&lt;span class="w"&gt; &lt;/span&gt;:warn
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Cranking up the concurrency makes things more interesting:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wrk&lt;span class="w"&gt; &lt;/span&gt;-t200&lt;span class="w"&gt; &lt;/span&gt;-c200&lt;span class="w"&gt; &lt;/span&gt;-d60s&lt;span class="w"&gt; &lt;/span&gt;--latency&lt;span class="w"&gt; &lt;/span&gt;-s&lt;span class="w"&gt; &lt;/span&gt;wrk.lua&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;http://159.89.197.173&amp;quot;&lt;/span&gt;
Running&lt;span class="w"&gt; &lt;/span&gt;1m&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;@&lt;span class="w"&gt; &lt;/span&gt;http://159.89.197.173
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;200&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;threads&lt;span class="w"&gt; &lt;/span&gt;and&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;200&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;connections
&lt;span class="w"&gt;  &lt;/span&gt;Thread&lt;span class="w"&gt; &lt;/span&gt;Stats&lt;span class="w"&gt;   &lt;/span&gt;Avg&lt;span class="w"&gt;      &lt;/span&gt;Stdev&lt;span class="w"&gt;     &lt;/span&gt;Max&lt;span class="w"&gt;   &lt;/span&gt;+/-&lt;span class="w"&gt; &lt;/span&gt;Stdev
&lt;span class="w"&gt;    &lt;/span&gt;Latency&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;90&lt;/span&gt;.91ms&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;106&lt;/span&gt;.45ms&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.12s&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;95&lt;/span&gt;.02%
&lt;span class="w"&gt;    &lt;/span&gt;Req/Sec&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;14&lt;/span&gt;.06&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;.31&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;20&lt;/span&gt;.00&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;73&lt;/span&gt;.44%
&lt;span class="w"&gt;  &lt;/span&gt;Latency&lt;span class="w"&gt; &lt;/span&gt;Distribution
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;50&lt;/span&gt;%&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;67&lt;/span&gt;.35ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;75&lt;/span&gt;%&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;73&lt;/span&gt;.49ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;90&lt;/span&gt;%&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;83&lt;/span&gt;.67ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;99&lt;/span&gt;%&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;681&lt;/span&gt;.16ms
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;162932&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;requests&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.00m,&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;353&lt;/span&gt;.22MB&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt;
Requests/sec:&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;2710&lt;/span&gt;.91
Transfer/sec:&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;.88MB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we are actually making the server work, and it's CPU bound during the run.&lt;/p&gt;
&lt;p&gt;Next I ran the same test from another Droplet in the same data center:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wrk&lt;span class="w"&gt; &lt;/span&gt;-t12&lt;span class="w"&gt; &lt;/span&gt;-c12&lt;span class="w"&gt; &lt;/span&gt;-d60s&lt;span class="w"&gt; &lt;/span&gt;--latency&lt;span class="w"&gt; &lt;/span&gt;-s&lt;span class="w"&gt; &lt;/span&gt;wrk.lua&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;http://159.89.197.173&amp;quot;&lt;/span&gt;
Running&lt;span class="w"&gt; &lt;/span&gt;1m&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;@&lt;span class="w"&gt; &lt;/span&gt;http://159.89.197.173
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;threads&lt;span class="w"&gt; &lt;/span&gt;and&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;connections
&lt;span class="w"&gt;  &lt;/span&gt;Thread&lt;span class="w"&gt; &lt;/span&gt;Stats&lt;span class="w"&gt;   &lt;/span&gt;Avg&lt;span class="w"&gt;      &lt;/span&gt;Stdev&lt;span class="w"&gt;     &lt;/span&gt;Max&lt;span class="w"&gt;   &lt;/span&gt;+/-&lt;span class="w"&gt; &lt;/span&gt;Stdev
&lt;span class="w"&gt;    &lt;/span&gt;Latency&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;.54ms&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;.91ms&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;21&lt;/span&gt;.62ms&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;82&lt;/span&gt;.76%
&lt;span class="w"&gt;    &lt;/span&gt;Req/Sec&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;284&lt;/span&gt;.03&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;29&lt;/span&gt;.73&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;414&lt;/span&gt;.00&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;74&lt;/span&gt;.49%
&lt;span class="w"&gt;  &lt;/span&gt;Latency&lt;span class="w"&gt; &lt;/span&gt;Distribution
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;50&lt;/span&gt;%&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;.35ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;75&lt;/span&gt;%&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;.95ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;90&lt;/span&gt;%&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;.48ms
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="m"&gt;99&lt;/span&gt;%&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="m"&gt;6&lt;/span&gt;.53ms
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="m"&gt;203770&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;requests&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.00m,&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;441&lt;/span&gt;.75MB&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt;
Requests/sec:&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="m"&gt;3393&lt;/span&gt;.59
Transfer/sec:&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="m"&gt;7&lt;/span&gt;.36MB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ping&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173
PING&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;56&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="m"&gt;84&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;of&lt;span class="w"&gt; &lt;/span&gt;data.
&lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;61&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;.36&lt;span class="w"&gt; &lt;/span&gt;ms
&lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;61&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;.439&lt;span class="w"&gt; &lt;/span&gt;ms
&lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;61&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;.431&lt;span class="w"&gt; &lt;/span&gt;ms
&lt;span class="m"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;bytes&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;159&lt;/span&gt;.89.197.173:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;61&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;.457&lt;span class="w"&gt; &lt;/span&gt;ms
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
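As a rough sanity check of these numbers (my own back-of-the-envelope math, not part of the original benchmark), a closed-loop load generator like wrk obeys Little's Law: throughput is roughly the number of open connections divided by the average latency. That predicts both runs reasonably well:

```python
# Little's Law sanity check: for a closed-loop load generator,
# requests/sec is approximately connections / average latency.
# Latency figures are taken from the wrk output above.

def estimated_rps(connections, avg_latency_ms):
    """Estimate requests/sec from connection count and average latency."""
    return connections / (avg_latency_ms / 1000.0)

# Cross-data-center run: 12 connections at 73.06 ms average latency
print(round(estimated_rps(12, 73.06)))  # 164 estimated vs 184.77 measured

# Same-data-center run: 12 connections at 3.54 ms average latency
print(round(estimated_rps(12, 3.54)))   # 3390 estimated vs 3393.59 measured
```

The same-data-center estimate lands almost exactly on the measured 3393.59 req/s, confirming that latency, not server capacity, was the bottleneck in the low-concurrency runs.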

&lt;p&gt;On the whole, it's quite impressive performance for such a cheap instance.&lt;/p&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/><category term="performance"/><category term="deployment"/></entry><entry><title>SaaS pricing: users are not all the same</title><link href="https://www.cogini.com/blog/saas-pricing-users-are-not-all-the-same/" rel="alternate"/><published>2018-05-18T00:00:00+08:00</published><updated>2018-05-18T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-18:/blog/saas-pricing-users-are-not-all-the-same/</id><summary type="html">&lt;p&gt;It's popular these days to use hosted applications instead of running your own
infrastructure. It's frustrating as a customer, though, when the pricing model is
not sophisticated enough to match your actual usage.&lt;/p&gt;
&lt;p&gt;In a SaaS product, your pricing should scale with the value the customer
gets from the product …&lt;/p&gt;</summary><content type="html">&lt;p&gt;It's popular these days to use hosted applications instead of running your own
infrastructure. It's frustrating as a customer, though, when the pricing model is
not sophisticated enough to match your actual usage.&lt;/p&gt;
&lt;p&gt;In a SaaS product, your pricing should scale with the value the customer
gets from the product. You want to charge big customers more while keeping it
affordable for small customers, and you need to enable early stages of usage in
big companies such as pilots or "viral" adoption.&lt;/p&gt;
&lt;p&gt;If the value scales with the number of users, then charging per user makes
sense. If you are making an app to help manage a fleet of vehicles, then charging
per vehicle may make more sense.&lt;/p&gt;
&lt;p&gt;You also want natural "price breaks" which separate your entry level users from
the features in the "pro" or "enterprise" plans. People who have 50 users
have different problems than ones that have five, and you can charge them more
if you are providing more total value.&lt;/p&gt;
&lt;h2&gt;Users are not all the same&lt;/h2&gt;
&lt;p&gt;There is a SaaS industry trend towards per-user pricing.&lt;/p&gt;
&lt;p&gt;The fundamental problem comes when users within a single customer account
get different amounts of value from your product. There is also a danger of
alienating the small customers and technical users who have the potential to
drive viral growth.&lt;/p&gt;
&lt;p&gt;For example, the GitHub code hosting and collaboration platform announced a
change in their pricing model that makes things difficult for consulting
companies like ours. Instead of charging for the number of projects
hosted, they charge per user per month. If the users are all active developers,
then customers are getting value from the platform no matter how many projects
there are, and it scales naturally with the size of the customer's organization.&lt;/p&gt;
&lt;p&gt;The new change, however, increases our costs significantly without increasing
value because we have inactive users. If we host a client's project in our
GitHub organization, then we need accounts for our developers and one or two
for client staff. When we are actively developing the project, then that's
fine. If the project launches and switches to a maintenance phase, then we
end up paying too much.&lt;/p&gt;
&lt;p&gt;If the project is not active, paying for our developers is fine, as they will
be working on other projects, but we are also paying for the inactive client
accounts. If we take away client access to their source code, then they
(reasonably) freak out. If we archive the repo, then it's the same thing, and it's
an administrative hassle for maintenance.&lt;/p&gt;
&lt;p&gt;If the client has their own organization, then it's worse. They pay for their
own inactive users and also for our inactive users. So they can remove our
users and save the money, but that's a pain if we need to do some maintenance.
When you don't have a lot of activity, it's like every bug fix costs $10 vig to
GitHub to add the user for a month.&lt;/p&gt;
&lt;p&gt;We see similar issues with production incident management products. I am ok
with paying $50/month per on-call ops person, as we are getting value from the
product. I don't like paying $50/month per developer who might potentially need
to deal with a problem that gets escalated, or a product owner who might need
to see what the status is. These users do not have the same level of
interaction with the product.&lt;/p&gt;
&lt;p&gt;Failure to get this right can result in people setting up their system to
minimize costs rather than using it "properly." For security, users should have
their own logins with roles that allow them to do their job. A sign that your
pricing model has problems is when you see people sharing account
logins. For example, if you use GitHub's issue tracker, then anyone in your
company might need to be able to create a bug report. Paying $10/user/month
for that makes no sense, so we end up with a shared account. That's inconvenient,
though, e.g. users can't get emails notifying them when the developer responds.&lt;/p&gt;
&lt;p&gt;In your SaaS product, you may separate the management of "billable" resources,
giving the owner control over how much money is being spent. For example,
the owner can create resources that cost money each month, while the primary
users manage them.&lt;/p&gt;
&lt;p&gt;Sometimes the SaaS pricing seems like its own bubble world and we forget that
there are other ways of doing the same thing. If I were to buy a little time
tracking app that ran on my computer, I would expect to pay, say, $19.95, not
$120 per year. For a while we used &lt;a href="https://www.getharvest.com/"&gt;Harvest&lt;/a&gt;
for time tracking, but it was $12 per user per month. The ROI on us making our
own solution was a month or two, and we got to integrate it with processes like
invoicing and accounting to make it work better for us.&lt;/p&gt;</content><category term="Products"/><category term="saas"/><category term="pricing"/></entry><entry><title>Avoiding GenServer bottlenecks</title><link href="https://www.cogini.com/blog/avoiding-genserver-bottlenecks/" rel="alternate"/><published>2018-05-17T00:00:00+08:00</published><updated>2018-05-17T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-17:/blog/avoiding-genserver-bottlenecks/</id><summary type="html">&lt;p&gt;GenServers are the standard way to create services in Elixir. They are very
useful, but when used incorrectly they can cause unnecessary problems. This is
particularly an issue for developers coming from object oriented languages, who
attempt to treat GenServers as object instances. Instead we should think in
functional terms …&lt;/p&gt;</summary><content type="html">&lt;p&gt;GenServers are the standard way to create services in Elixir. They are very
useful, but when used incorrectly they can cause unnecessary problems. This is
particularly an issue for developers coming from object oriented languages, who
attempt to treat GenServers as object instances. Instead we should think in
functional terms of data and transformation, shared state and concurrent access.&lt;/p&gt;
&lt;p&gt;At its heart, a GenServer is a separate process that receives a message, does
some work, updates process state, then sends back a response. If that matches
your problem, great. It's important to recognize, though, that a GenServer only
handles one request at a time, and it can become a bottleneck for your
system. The Erlang system has other tools available which may work better.&lt;/p&gt;
&lt;p&gt;Following are some examples of how GenServers became bottlenecks in high volume
systems, and how we resolved them.&lt;/p&gt;
&lt;h1&gt;Example: Geoip lookups&lt;/h1&gt;
&lt;p&gt;In a web application we needed to determine which country the request was
coming from based on the IP address. &lt;a href="https://www.maxmind.com/"&gt;MaxMind&lt;/a&gt; has
various databases related to IP addresses. They are a binary file format
which supports efficient querying by network prefix.&lt;/p&gt;
&lt;p&gt;The database file we were using is about 65MB in size. Rather than read the
data from disk on every request, our initial design was to put it in a
GenServer. When the application starts, the GenServer loads the data file into
its state. After that, the processes handling HTTP requests send it a "call"
request with the IP address. It loads the data from state, looks up the country
and returns it to the caller.&lt;/p&gt;
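The initial design can be sketched like this (module and function names are illustrative, not the production code, and find_country stands in for the real MaxMind lookup):

```elixir
# Sketch of the original single-GenServer design: every lookup in the
# whole system funnels through this one process.
defmodule GeoIp.Server do
  use GenServer

  def start_link(db_path) do
    GenServer.start_link(__MODULE__, db_path, name: __MODULE__)
  end

  # Callers block here until the single server process replies
  def lookup(ip), do: GenServer.call(__MODULE__, {:lookup, ip})

  @impl true
  def init(db_path) do
    # Load the ~65MB database into process state once at startup
    {:ok, File.read!(db_path)}
  end

  @impl true
  def handle_call({:lookup, ip}, _from, data) do
    {:reply, find_country(data, ip), data}
  end

  # Placeholder for the real binary search over the MaxMind format
  defp find_country(_data, _ip), do: :stub
end
```
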
&lt;p&gt;That worked fine for a while, but at a certain point, we started getting
timeouts. A GenServer only handles a single request at a time, so it had become
the bottleneck. We were effectively forcing all the requests in the system to
line up and go through the GenServer process one by one.&lt;/p&gt;
&lt;p&gt;To avoid that, we switched to a process pool. Using the
&lt;a href="https://github.com/erlware/episcina"&gt;Episcina&lt;/a&gt; library, we ran multiple
instances of the GenServer. The process handling a request would check out a
server from the pool, call it to get the data, then check it back in to the
pool.&lt;/p&gt;
&lt;p&gt;That worked for a while, but eventually the lookups became the bottleneck for
the system again. We added more and more servers to the pool, but it didn't
help. At first we thought it was the queue manager, as it was a GenServer, too.
We needed to send multiple messages: one to check out the GenServer, one to run
the request, then another to check it back in. The message passing is actually
quite fast, though.&lt;/p&gt;
&lt;p&gt;The bigger issue was how many processes we should have in the pool and how to
manage them. Our peak load was driven by traffic spikes, particularly DDOS
attacks. We would get sustained traffic of 5-10K requests per second, with
spikes above that.&lt;/p&gt;
&lt;p&gt;If the pool starts with a small number of processes, then there is a delay
launching new processes as we read the data from the disk.  Sometimes we would
launch hundreds of processes at once in response to demand, all of them
fighting for the same disk. If we pre-loaded lots of processes, then we would
use a lot of RAM, and our startup time was poor.&lt;/p&gt;
&lt;p&gt;The solution to this, like a lot of Elixir performance issues, was to use ETS.
ETS stands for "Erlang Term Storage." It is an in-memory key/value database
built into the Erlang virtual machine and highly optimized for concurrent
access between multiple processes. It works on Erlang "terms," i.e. data
structures, so there is no serialization overhead. Lookup times in ETS are less
than one microsecond, which makes them 1000 times faster than something like
Redis.&lt;/p&gt;
&lt;p&gt;On startup, we load the geoip data into an ETS table. Then, in the process that
handles the HTTP request, we load the data from the ETS table and do the lookup
on the data blob. You might think that would be inefficient due to copying data
around, but the Erlang virtual machine has optimized the process of sharing
binary data. If a binary is larger than 64 bytes, it gets stored in a shared
binary heap. In fact, we are just passing around a reference to the binary
data between ETS and the process. This is a case where immutable data is a big
win.&lt;/p&gt;
&lt;p&gt;After this optimization, our worst case geoip lookups were taking five
microseconds, and our memory usage dropped a lot. That was pretty good, but
when we are under DDOS attack, we may get a lot of requests from the same IPs.
We added a second ETS table to cache the results of the lookup, getting the
time to less than one microsecond.&lt;/p&gt;
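The ETS approach, including the second cache table, can be sketched as follows (table and module names are ours for illustration; find_country again stands in for the real binary lookup):

```elixir
# One ETS table holds the raw database blob; a second caches per-IP
# results so repeated lookups skip the binary search entirely.
defmodule GeoIp.Ets do
  @data :geoip_data
  @cache :geoip_cache

  def init(db_blob) do
    :ets.new(@data, [:named_table, :set, :public, read_concurrency: true])
    :ets.new(@cache, [:named_table, :set, :public, read_concurrency: true])
    # Binaries over 64 bytes live on the shared binary heap, so this
    # stores a reference to the blob, not a copy
    :ets.insert(@data, {:db, db_blob})
  end

  # Runs entirely in the caller's process: no message passing, no pool
  def lookup(ip) do
    case :ets.lookup(@cache, ip) do
      [{^ip, country}] ->
        country

      [] ->
        [{:db, blob}] = :ets.lookup(@data, :db)
        country = find_country(blob, ip)
        :ets.insert(@cache, {ip, country})
        country
    end
  end

  # Placeholder for the real lookup over the database format
  defp find_country(_blob, _ip), do: :stub
end
```
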
&lt;h1&gt;Principles&lt;/h1&gt;
&lt;p&gt;This is a good example of the principle of "model the natural concurrency of
your application." We had created a lot of processes to manage the geoip data
and lookups, and we had overhead talking to them. The number of processes was
different from the number of requests.&lt;/p&gt;
&lt;p&gt;For each HTTP request, we have a &lt;a href="https://github.com/ninenines/cowboy"&gt;Cowboy&lt;/a&gt;
process that does the work, then goes back into a pool. The right answer was to
do all the work associated with the request in this process. We don't
have the overhead and latency of dealing with the queue manager or sending
messages to the GenServer.&lt;/p&gt;
&lt;p&gt;Another principle is that we should restrict load at the edge of the system.
If we don't have enough resources to handle a request, we should reject it
rather than overloading the bottleneck and making the system fail (see below).
When we do the work in the HTTP request process, Cowboy makes it possible
(though not required) to limit the number of acceptor processes. So if we can
handle 1000 requests per second, we can limit it at the HTTP layer, causing the
requests to be queued by the kernel in the TCP/IP layer. That in turn gives
backpressure to clients of the system.&lt;/p&gt;
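As a sketch, assuming a Phoenix endpoint in front of Cowboy, the Ranch transport options cap concurrency at the edge (the app name, endpoint module, and numbers here are placeholders, not tuned values):

```elixir
# Hypothetical endpoint config: max_connections caps concurrent
# connections and num_acceptors sizes the accept loop, so excess
# clients queue in the kernel's TCP backlog instead of overloading us.
config :my_app, MyAppWeb.Endpoint,
  http: [
    port: 4000,
    transport_options: [num_acceptors: 100, max_connections: 1_000]
  ]
```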
&lt;p&gt;One common case where we really do need to limit concurrent access is when we
are talking to a database like PostgreSQL. The database works best with a
relatively small number of simultaneous requests; any more causes problems with
locking. The fundamental bottleneck in the system is the concurrency of the db.
Once again, ETS can be a solution by caching db results that don't change.&lt;/p&gt;
&lt;h1&gt;Example: logging&lt;/h1&gt;
&lt;p&gt;In a real time bidding system we needed to write a transaction log for each
request for accounting purposes. This is not a traditional text error/debug
log; it is a CSV file.&lt;/p&gt;
&lt;p&gt;We originally implemented this as an Erlang
&lt;a href="http://erlang.org/doc/man/gen_event.html"&gt;gen_event&lt;/a&gt; handler. Under the hood,
though, these handlers are GenServers.&lt;/p&gt;
&lt;p&gt;The event handler received events from multiple HTTP request processes,
formatting them and writing them to a log file in an orderly way.  This makes
sense, as having multiple processes independently opening and writing to log
files would cause a lot of conflict. The problem is that the GenServer once
again became the bottleneck for the whole system. We were making all requests
line up to go through the GenServer one by one. It got overloaded and timed out
as disk I/O became an issue under load.&lt;/p&gt;
&lt;p&gt;We could have played the same game of splitting things up into multiple
GenServers. Instead, we followed a rule of Erlang: "Ericsson probably ran
into this problem at British Telecom 20 years ago and solved it." So we went
looking into the Erlang libs and found &lt;a href="http://erlang.org/doc/man/disk_log.html"&gt;disk_log&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;disk_log&lt;/code&gt; is very full featured, designed for exactly this situation. Telecom
systems produce Call Detail Records (CDRs). Every time a switch touches a
call, it records the information about who called whom and how long they
talked. It then sends the records to a central server for "mediation," where
they calculate the bill from the various pieces.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;disk_log&lt;/code&gt; can handle 100K writes per second, using low level Erlang I/O
features to support concurrent writes from multiple processes. It supports error
handling, log rotation, and reading back from logs. It's great, but the docs
are limited; all you have is the man page. I am sure Ericsson has lots of
examples from their products, but those are not open source. So we needed to
find some examples and make some prototypes, but it solved the problem the
right way.&lt;/p&gt;
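A minimal sketch of calling &lt;code&gt;disk_log&lt;/code&gt; from Elixir (the log name, file path, and sizes are made up; the &lt;code&gt;:external&lt;/code&gt; format is the one you want for writing CSV bytes rather than Erlang terms):

```elixir
# Open a wrap log: it rotates across a fixed set of files, with the
# oldest file overwritten once all of them are full.
{:ok, log} =
  :disk_log.open(
    name: :transactions,
    file: ~c"/tmp/transactions.log",
    type: :wrap,
    format: :external,
    # 10 files of 10MB each
    size: {10 * 1024 * 1024, 10}
  )

# Any number of processes can write concurrently to the same log;
# blog/2 appends raw bytes (log/2 would append Erlang terms instead)
:ok = :disk_log.blog(log, "ts,request_id,amount\n")
:ok = :disk_log.close(log)
```
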
&lt;p&gt;This is one of the things I love about Erlang. The platform is very mature and
has good solutions for the real problems we have. It's not magic; the laws of
physics still apply, but it is a lot better than starting from scratch
as we would with systems like Golang or Node.js.&lt;/p&gt;
&lt;h1&gt;Backpressure and load&lt;/h1&gt;
&lt;p&gt;One big problem with &lt;code&gt;gen_event&lt;/code&gt; is that it doesn't have a mechanism for
backpressure. It's really designed for low message volumes, and is now
deprecated in Elixir. Instead you should generally use
&lt;a href="https://github.com/elixir-lang/gen_stage"&gt;GenStage&lt;/a&gt;, which uses a pull
model to avoid overload.&lt;/p&gt;
&lt;p&gt;Lack of backpressure is a fundamental issue with the way Erlang process
mailboxes work. If you send more data to a GenServer than it can handle, the
mailbox will fill up, and eventually you will get a timeout. If you are having
performance problems, look for processes with overloaded mailboxes and deal
with them.&lt;/p&gt;
&lt;p&gt;You may wonder why the Erlang system hasn't fixed this. The reason is that the
current mechanism has low overhead. If we had to acknowledge every message, it
would double the message load on the system. It also fits with the unreliable
nature of real world systems. If we send a message and don't get a response
back, then we try again. That handles messages that get lost due to network
problems, crashes and overload with the same mechanism.&lt;/p&gt;
&lt;p&gt;Another reason is that telecom systems are sold according to the amount of load
they can handle. As part of the product development process, they identify what
the bottlenecks are, then they limit the inbound load to what the system can
handle, rejecting anything beyond that. If you have more, you need to buy
another telephone switch. If a process mailbox is filling up in production, it
means you have a bug or some other resource problem, e.g. failing hardware.&lt;/p&gt;
&lt;p&gt;With systems connected to the Internet, we can't control our load, but we can
be smart about how we deal with it. Have a look at &lt;a href="https://github.com/uwiger/jobs"&gt;Ulf Wiger's jobs
framework&lt;/a&gt; and his &lt;a href="https://www.youtube.com/watch?v=9IymY8HYuyc"&gt;presentation at the Erlang
User Conference&lt;/a&gt; for more info on
load regulation. (Another rule of Erlang is "pay attention to anything Ulf
Wiger does.")&lt;/p&gt;
&lt;p&gt;The standard Elixir &lt;a href="https://hexdocs.pm/logger/master/Logger.html"&gt;Logger&lt;/a&gt;
framework is based on &lt;code&gt;gen_event&lt;/code&gt; and has a similar issue. It monitors its own
mailbox and if it gets too full, it drops messages and applies backpressure by
switching to "synchronous" mode. That makes things slower, though, so it
can cause your system to crash, basically kicking it when it's down. If you are
limiting load at the edge, you could use the mailbox size of your bottleneck as
part of your load check when determining whether you accept requests.&lt;/p&gt;
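As a sketch of that last idea, an admission check can look at the bottleneck's mailbox via &lt;code&gt;Process.info/2&lt;/code&gt; (treating Logger's queue length as a load signal is our own pattern, not a built-in feature, and the threshold here is arbitrary):

```elixir
# Reject new requests at the edge when the bottleneck process's
# mailbox is backing up, instead of letting it overload and time out.
defmodule LoadCheck do
  @max_queue 1_000

  def accept_request? do
    case Process.whereis(Logger) do
      # No such registered process: nothing to throttle on
      nil ->
        true

      pid ->
        {:message_queue_len, len} = Process.info(pid, :message_queue_len)
        len < @max_queue
    end
  end
end
```
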
&lt;p&gt;There is some more discussion about logging in &lt;a href="https://www.cogini.com/blog/presentation-on-elixir-performance/"&gt;this performance tuning
presentation&lt;/a&gt;&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="erlang"/><category term="performance"/><category term="logging"/><category term="ets"/><category term="architecture"/></entry><entry><title>Database migrations in the cloud</title><link href="https://www.cogini.com/blog/database-migrations-in-the-cloud/" rel="alternate"/><published>2018-05-16T00:00:00+08:00</published><updated>2018-05-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-16:/blog/database-migrations-in-the-cloud/</id><summary type="html">&lt;p&gt;&lt;a href="https://hexdocs.pm/phoenix/ecto.html"&gt;Database migrations&lt;/a&gt; are used to
automatically keep the database in sync with the code that uses it.
Elixir apps should be deployed as &lt;a href="https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/"&gt;releases, supervised by
systemd&lt;/a&gt;.
Here is an &lt;a href="https://github.com/cogini/elixir-deploy-template#database-migrations"&gt;example of how to run migrations when deploying Elixir
releases&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It's tempting to automatically run database migrations when the app …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;a href="https://hexdocs.pm/phoenix/ecto.html"&gt;Database migrations&lt;/a&gt; are used to
automatically keep the database in sync with the code that uses it.
Elixir apps should be deployed as &lt;a href="https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/"&gt;releases, supervised by
systemd&lt;/a&gt;.
Here is an &lt;a href="https://github.com/cogini/elixir-deploy-template#database-migrations"&gt;example of how to run migrations when deploying Elixir
releases&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It's tempting to automatically run database migrations when the app starts up.
Once we get into more complex deployment scenarios, however, that can cause
more problems than it solves. It's better to run migrations separately.&lt;/p&gt;
&lt;p&gt;For production systems, it's important to be able to roll back code in case of
problems. It's a lot easier to roll back code than data, though.&lt;/p&gt;
&lt;p&gt;Whenever possible we make all our database changes work with old and new code
releases.  For example, we first run a db change which adds a column to a db,
make sure that it's working properly with the old code, then deploy new code
which uses the new column.  If we need to roll back, then we know that the old
code will still work with the new db schema.&lt;/p&gt;
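A hypothetical Ecto migration for such a backward-compatible change might look like this (the table and column names are examples; the column is added as nullable so old code that never writes it keeps working):

```elixir
defmodule MyApp.Repo.Migrations.AddNicknameToUsers do
  use Ecto.Migration

  def change do
    alter table(:users) do
      # No `null: false` and no default yet: old code can still insert
      # rows. Tighten the constraint in a later migration, once only
      # new code is running.
      add :nickname, :string
    end
  end
end
```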
&lt;p&gt;We may also be running code in AWS in an Auto Scaling Group, deploying via a
Blue/Green process with AWS CodeDeploy. This means that the db might be
shared between old and new code at runtime as we upgrade the cluster.&lt;/p&gt;
&lt;p&gt;Our normal process is to run the db migrations from an "Operations and
Maintenance" instance in the deploy environment, e.g. the build server.  In the
test environment, we check out the code and run migrations against the db. We
verify that it's working properly, then we deploy the new code and verify it
works. Then we run the db updates in the production environment and deploy the
code.&lt;/p&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/><category term="db migrations"/><category term="cloud"/><category term="deployment"/></entry><entry><title>Deploying Elixir apps without sudo</title><link href="https://www.cogini.com/blog/deploying-elixir-apps-without-sudo/" rel="alternate"/><published>2018-05-16T00:00:00+08:00</published><updated>2018-05-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-16:/blog/deploying-elixir-apps-without-sudo/</id><summary type="html">&lt;p&gt;We normally &lt;a href="https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/"&gt;deploy Elixir apps as releases, supervised by
systemd&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After we have deployed the new release, we restart the app to make it live:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;/bin/systemctl&lt;span class="w"&gt; &lt;/span&gt;restart&lt;span class="w"&gt; &lt;/span&gt;foo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The user account needs sufficient permissions to restart the app, though.
Instead of giving the deploy account full sudo permissions …&lt;/p&gt;</summary><content type="html">&lt;p&gt;We normally &lt;a href="https://www.cogini.com/blog/best-practices-for-deploying-elixir-apps/"&gt;deploy Elixir apps as releases, supervised by
systemd&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After we have deployed the new release, we restart the app to make it live:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;/bin/systemctl&lt;span class="w"&gt; &lt;/span&gt;restart&lt;span class="w"&gt; &lt;/span&gt;foo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The user account needs sufficient permissions to restart the app, though.
Instead of giving the deploy account full sudo permissions, you can make a
user-specific sudo config file which specifies what commands it can run,
e.g. &lt;code&gt;/etc/sudoers.d/deploy-foo&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;deploy ALL=(ALL) NOPASSWD: /bin/systemctl start foo, /bin/systemctl stop foo, /bin/systemctl restart foo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That works ok, but it would be better if we didn't require sudo permissions at
all. One option is to take advantage of the supervision provided by systemd to
restart the app.&lt;/p&gt;
&lt;p&gt;When we deploy a new release, the deploy user uploads the new code, sets up the
symlink, then touches a flag file. Systemd notices and restarts the app.&lt;/p&gt;
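A sketch of how that can be wired up with a systemd path unit (the unit names, flag file path, and app unit &lt;code&gt;foo&lt;/code&gt; are hypothetical):

```ini
# foo-restart.path: systemd watches the flag file the deploy user touches
[Unit]
Description=Restart foo when the deploy flag file changes

[Path]
PathChanged=/srv/foo/flag.restart
Unit=foo-restart.service

[Install]
WantedBy=multi-user.target

# foo-restart.service: a oneshot that restarts the app unit, running as
# root under systemd, so the deploy user needs no sudo at all
[Unit]
Description=Restart foo

[Service]
Type=oneshot
ExecStart=/bin/systemctl restart foo
```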
&lt;p&gt;See &lt;a href="https://github.com/cogini/mix_deploy#restarting"&gt;mix_deploy&lt;/a&gt; for examples.&lt;/p&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/><category term="deployment"/></entry><entry><title>Getting the client public IP address in Phoenix</title><link href="https://www.cogini.com/blog/getting-the-client-public-ip-address-in-phoenix/" rel="alternate"/><published>2018-05-16T00:00:00+08:00</published><updated>2018-05-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-16:/blog/getting-the-client-public-ip-address-in-phoenix/</id><summary type="html">&lt;p&gt;When your app is running behind a proxy like Nginx or a CDN, then the requests will all look like they are coming from the proxy. Use the &lt;code&gt;X-Forwarded-For&lt;/code&gt; header to set the &lt;code&gt;remote_ip&lt;/code&gt; correctly.&lt;/p&gt;</summary><content type="html">&lt;p&gt;When your app is running behind a proxy like Nginx, then the request will look
like it's coming from Nginx, i.e. the IP will be &lt;code&gt;127.0.0.1&lt;/code&gt;. Similarly, if
Nginx is behind a CDN, then all the requests will come from the IP of the CDN.&lt;/p&gt;
&lt;p&gt;In order to log the request properly and make decisions like rate limiting, we
need to get the IP address from a header set by the CDN.&lt;/p&gt;
&lt;p&gt;As described in the &lt;a href="https://hexdocs.pm/plug/Plug.Conn.html"&gt;Plug docs&lt;/a&gt;, we are expected
to overwrite the &lt;code&gt;remote_ip&lt;/code&gt; field in the Conn.&lt;/p&gt;
&lt;p&gt;Following is a plug that reads the &lt;code&gt;X-Forwarded-For&lt;/code&gt; HTTP header. You can do something
similar with HAProxy’s &lt;code&gt;PROXY&lt;/code&gt; protocol.&lt;/p&gt;
&lt;p&gt;Add it to your app's Endpoint:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;plug&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;MyApp.Plug.PublicIp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="kd"&gt;defmodule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;MyApp.Plug.PublicIp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@moduledoc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Get public IP address of request from x-forwarded-for header&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@behaviour&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Plug&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:my_app&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(%{&lt;/span&gt;&lt;span class="ss"&gt;assigns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;_opts&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;_opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Plug.Conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_req_header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;x-forwarded-for&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nc"&gt;Plug.Conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:inet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ntoa&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;get_peer_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;def&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;vals&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:trust_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;# Rewrite standard remote_ip field with value from header&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_ip_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;vals&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;# See https://hexdocs.pm/plug/Plug.Conn.html&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;remote_ip&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;Plug.Conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:inet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ntoa&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nc"&gt;Plug.Conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:inet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ntoa&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;get_peer_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_ip_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;vals&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_ip_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]),&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_peer_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_ip_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;val&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# Split into multiple values&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;comps&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;val&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sx"&gt;~r{&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sx"&gt;*,&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sx"&gt;*}&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ni"&gt;&amp;amp;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;unknown&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="c1"&gt;# Get rid of &amp;quot;unknown&amp;quot; values&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ni"&gt;&amp;amp;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;:&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="c1"&gt;# Split IP from port, if any&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ni"&gt;&amp;amp;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt;                 &lt;/span&gt;&lt;span class="c1"&gt;# Filter out blanks&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parse_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ni"&gt;&amp;amp;1&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="c1"&gt;# Parse address into :inet.ip_address tuple&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;|&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;is_public_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ni"&gt;&amp;amp;1&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="c1"&gt;# Elminate internal IP addreses, e.g. 192.168.1.1&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;comps&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_peer_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;comp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;comp&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@spec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_peer_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Plug.Conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:inet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;get_peer_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;_port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;peer&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@spec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;parse_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:inet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;parse_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:inet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parse_ipv4strict_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;to_charlist&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:einval&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:einval&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# Whether the input is a valid, public IP address&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# http://en.wikipedia.org/wiki/Private_network&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="na"&gt;@spec&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;is_public_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:inet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;atom&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;boolean&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;defp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;is_public_ip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ip_address&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;do&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;192&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;168&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;172&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;second&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;when&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;second&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;second&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;127&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;_&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;true&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="ss"&gt;:einval&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/><category term="plug"/><category term="rate limiting"/></entry><entry><title>Port forwarding with iptables</title><link href="https://www.cogini.com/blog/port-forwarding-with-iptables/" rel="alternate"/><published>2018-05-16T00:00:00+08:00</published><updated>2018-05-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-16:/blog/port-forwarding-with-iptables/</id><summary type="html">&lt;p&gt;In order to listen on a TCP port less than 1024, an app traditionally needs to
be started as root. Over the years this has resulted in many security problems.&lt;/p&gt;
&lt;p&gt;A better solution is to run the application on an unprivileged port such as 4000, and
redirect traffic in the …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In order to listen on a TCP port less than 1024, an app traditionally needs to
be started as root. Over the years this has resulted in many security problems.&lt;/p&gt;
&lt;p&gt;A better solution is to run the application on an unprivileged port such as 4000, and
redirect traffic in the firewall from e.g. port 80 to 4000 using iptables.&lt;/p&gt;
&lt;h1&gt;Port forwarding with ufw&lt;/h1&gt;
&lt;p&gt;&lt;a href="https://help.ubuntu.com/community/UFW"&gt;UFW&lt;/a&gt; is Ubuntu's "Uncomplicated Firewall".&lt;/p&gt;
&lt;h2&gt;Enable ufw&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;ufw&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;enable&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Allow traffic to the app port&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;ufw&lt;span class="w"&gt; &lt;/span&gt;allow&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4000&lt;/span&gt;/tcp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;UFW doesn't have an easy command to do port forwarding, unfortunately, so we
need to add a raw iptables rule.&lt;/p&gt;
&lt;p&gt;Edit &lt;code&gt;/etc/ufw/before.rules&lt;/code&gt;. At the top of the file, add the following:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4000
COMMIT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
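
&lt;p&gt;After saving the file, reload ufw so that the new NAT rule takes effect:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;ufw&lt;span class="w"&gt; &lt;/span&gt;reload
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;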

&lt;h1&gt;Port forwarding with raw iptables&lt;/h1&gt;
&lt;p&gt;First, open up access to the app port:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;iptables&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;INPUT&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;tcp&lt;span class="w"&gt; &lt;/span&gt;--dport&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-j&lt;span class="w"&gt; &lt;/span&gt;ACCEPT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We can also open the port with rate limiting, useful for dealing with DDoS
attacks. The following command allows five requests per minute from a single IP
address, with a burst of 10:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;iptables&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;INPUT&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;tcp&lt;span class="w"&gt; &lt;/span&gt;--dport&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-m&lt;span class="w"&gt; &lt;/span&gt;state&lt;span class="w"&gt; &lt;/span&gt;--state&lt;span class="w"&gt; &lt;/span&gt;NEW&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;-m&lt;span class="w"&gt; &lt;/span&gt;hashlimit&lt;span class="w"&gt; &lt;/span&gt;--hashlimit-name&lt;span class="w"&gt; &lt;/span&gt;HTTP&lt;span class="w"&gt; &lt;/span&gt;--hashlimit&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;/minute&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;--hashlimit-burst&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;--hashlimit-mode&lt;span class="w"&gt; &lt;/span&gt;srcip&lt;span class="w"&gt; &lt;/span&gt;--hashlimit-htable-expire&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;300000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-j&lt;span class="w"&gt; &lt;/span&gt;ACCEPT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Finally, redirect port 80 to port 4000:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;iptables&lt;span class="w"&gt; &lt;/span&gt;-t&lt;span class="w"&gt; &lt;/span&gt;nat&lt;span class="w"&gt; &lt;/span&gt;-A&lt;span class="w"&gt; &lt;/span&gt;PREROUTING&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;tcp&lt;span class="w"&gt; &lt;/span&gt;--dport&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;-j&lt;span class="w"&gt; &lt;/span&gt;REDIRECT&lt;span class="w"&gt; &lt;/span&gt;--to-port&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;4000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can see the rules with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;iptables&lt;span class="w"&gt; &lt;/span&gt;-L&lt;span class="w"&gt; &lt;/span&gt;-n&lt;span class="w"&gt; &lt;/span&gt;-v
sudo&lt;span class="w"&gt; &lt;/span&gt;iptables&lt;span class="w"&gt; &lt;/span&gt;-t&lt;span class="w"&gt; &lt;/span&gt;nat&lt;span class="w"&gt; &lt;/span&gt;-L&lt;span class="w"&gt; &lt;/span&gt;-n&lt;span class="w"&gt; &lt;/span&gt;-v
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We need to make the rules persistent so that they survive a reboot:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;apt&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;netfilter-persistent&lt;span class="w"&gt; &lt;/span&gt;iptables-persistent
sudo&lt;span class="w"&gt; &lt;/span&gt;netfilter-persistent&lt;span class="w"&gt; &lt;/span&gt;save
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
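
&lt;p&gt;On Debian and Ubuntu, the saved rules are written to &lt;code&gt;/etc/iptables/rules.v4&lt;/code&gt;
(and &lt;code&gt;rules.v6&lt;/code&gt; for IPv6). To verify that the redirect rule was saved:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;grep&lt;span class="w"&gt; &lt;/span&gt;REDIRECT&lt;span class="w"&gt; &lt;/span&gt;/etc/iptables/rules.v4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;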

&lt;h1&gt;Configuring iptables with Ansible&lt;/h1&gt;
&lt;p&gt;This &lt;a href="https://github.com/cogini/mix-deploy-example/"&gt;example project for deploying Phoenix apps&lt;/a&gt;
has Ansible tasks to set up iptables for port forwarding.&lt;/p&gt;</content><category term="DevOps"/><category term="iptables"/><category term="ansible"/></entry><entry><title>Rate limiting Nginx requests</title><link href="https://www.cogini.com/blog/rate-limiting-nginx-requests/" rel="alternate"/><published>2018-05-16T00:00:00+08:00</published><updated>2018-05-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-16:/blog/rate-limiting-nginx-requests/</id><summary type="html">&lt;p&gt;Any popular service may be the unfortunate recipient of a DDOS attack. We find
that DDoS load ends up driving capacity planning, as it can easily be 10x the
normal load.&lt;/p&gt;
&lt;p&gt;You can rate limit at multiple levels. You might use a service such as
CloudFlare, filtering provided by your …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Any popular service may be the unfortunate recipient of a DDoS attack. We find
that DDoS load ends up driving capacity planning, as it can easily be 10x the
normal load.&lt;/p&gt;
&lt;p&gt;You can rate limit at multiple levels. You might use a service such as
CloudFlare, filtering provided by your hosting provider, a firewall on the
local machine, &lt;a href="https://www.nginx.com/"&gt;Nginx&lt;/a&gt; rate limiting, or the application itself. If you are
getting attacked regularly, you will probably end up limiting at all levels,
with different thresholds.&lt;/p&gt;
&lt;p&gt;The earlier you limit, the fewer resources are used, but the less information
you have about the attack and the less control you have over how you respond.
If you are running an API endpoint, for example, it's important to be able to
distinguish an attack from a legitimate client gone wild.&lt;/p&gt;
&lt;h1&gt;Getting the user's IP address in Nginx&lt;/h1&gt;
&lt;p&gt;If Nginx is behind a CDN, then all the requests will come from the IP of the
CDN. In order to log the request and make decisions like rate limiting, we need
to get the IP address from a header set by the CDN.&lt;/p&gt;
&lt;p&gt;Add this to &lt;code&gt;nginx.conf&lt;/code&gt;, telling Nginx to get the IP address from the
&lt;code&gt;X-Forwarded-For&lt;/code&gt; header:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;real_ip_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;set_real_ip_from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="s"&gt;.0.0.0/0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Note that this is not entirely trustworthy, as an attacker can still
send a request directly to the site with anything they like in the header.&lt;/p&gt;
&lt;p&gt;See &lt;a href="/blog/serving-your-phoenix-app-with-nginx/"&gt;Serving your Phoenix app with Nginx&lt;/a&gt;
for complete config file examples.&lt;/p&gt;
&lt;p&gt;Similarly, when Nginx proxies requests to the app, we need to pass the
request information through to the app.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;Host&lt;/span&gt;&lt;span class="w"&gt;               &lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;X-Real-IP&lt;/span&gt;&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;Refrerer&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="nv"&gt;$http_referer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;User-Agent&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="nv"&gt;$http_user_agent&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h1&gt;Getting the user's IP address in Phoenix&lt;/h1&gt;
&lt;p&gt;Configure Phoenix to &lt;a href="/blog/getting-the-client-public-ip-address-in-phoenix/"&gt;read the user's IP from the
x-forwarded-for HTTP header&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Rate limiting in Nginx&lt;/h1&gt;
&lt;p&gt;In &lt;code&gt;nginx.conf&lt;/code&gt; set up a rate limiting zone:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;limit_req_zone&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$binary_remote_addr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;zone=foo:10m&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;rate=1r/s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;limit_req_status&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;429&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In the vhost, limit requests for the zone:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;limit_req&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;zone=foo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;burst=5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;nodelay&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="DevOps"/><category term="nginx"/><category term="rate limiting"/></entry><entry><title>Serving your Phoenix app with Nginx</title><link href="https://www.cogini.com/blog/serving-your-phoenix-app-with-nginx/" rel="alternate"/><published>2018-05-16T00:00:00+08:00</published><updated>2018-05-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-16:/blog/serving-your-phoenix-app-with-nginx/</id><summary type="html">&lt;p&gt;It's common to run web apps behind a proxy such as &lt;a href="https://www.nginx.com/"&gt;Nginx&lt;/a&gt; or HAProxy.
Nginx listens on port 80, then forwards traffic to the app on another port, e.g. 4000.&lt;/p&gt;
&lt;p&gt;Following is an example &lt;code&gt;nginx.conf&lt;/code&gt; config:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;nginx&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;worker_processes&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;error_log&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;/var/log/nginx/error.log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pid …&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;It's common to run web apps behind a proxy such as &lt;a href="https://www.nginx.com/"&gt;Nginx&lt;/a&gt; or HAProxy.
Nginx listens on port 80, then forwards traffic to the app on another port, e.g. 4000.&lt;/p&gt;
&lt;p&gt;Following is an example &lt;code&gt;nginx.conf&lt;/code&gt; config:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;nginx&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;worker_processes&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;error_log&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;/var/log/nginx/error.log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;pid&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="s"&gt;/var/run/nginx.pid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;worker_rlimit_nofile&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65536&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;events&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;worker_connections&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65536&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;epoll&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;multi_accept&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;http&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;real_ip_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;set_real_ip_from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="s"&gt;.0.0.0/0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;server_tokens&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;include&lt;/span&gt;&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="s"&gt;/etc/nginx/mime.types&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;default_type&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;application/octet-stream&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;log_format&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&lt;/span&gt;&lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$remote_user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;$time_local]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$request&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&lt;/span&gt;
&lt;span class="w"&gt;                   &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&lt;/span&gt;&lt;span class="nv"&gt;$status&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$body_bytes_sent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$http_referer&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&lt;/span&gt;
&lt;span class="w"&gt;                   &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$http_user_agent&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt;$http_x_forwarded_for&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$request_time&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;access_log&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;/var/log/nginx/access.log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;limit_req_zone&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$binary_remote_addr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;zone=foo:10m&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;rate=1r/s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;limit_req_status&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;429&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;include&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;/etc/nginx/conf.d/*.conf&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here is a vhost for the app, e.g. &lt;code&gt;/etc/nginx/conf.d/foo.conf&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;listen&lt;/span&gt;&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;default_server&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;# server_name  example.com;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;root&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="s"&gt;/opt/foo/current/priv/static&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;access_log&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;/var/log/nginx/foo.access.log&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;main&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;error_log&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="s"&gt;/var/log/nginx/foo.error.log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;index&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# first attempt to serve request as file, then fall back to app&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;try_files&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$uri&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# expires max;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# access_log off;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kn"&gt;location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;@app&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;Host&lt;/span&gt;&lt;span class="w"&gt;               &lt;/span&gt;&lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;X-Real-IP&lt;/span&gt;&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;Refrerer&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;&lt;span class="nv"&gt;$http_referer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;proxy_set_header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;User-Agent&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="nv"&gt;$http_user_agent&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;limit_req&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;zone=foo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;burst=5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;nodelay&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kn"&gt;proxy_pass&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;http://127.0.0.1:4000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Proxy settings&lt;/h3&gt;
&lt;p&gt;The main setting that does the forwarding is &lt;code&gt;proxy_pass&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can set additional options depending on usage. For example, for an API
endpoint you can tune various buffers and timers to get better response times
than the defaults, which are tuned for more generic web serving:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;proxy_intercept_errors&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_buffering&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_buffer_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;128k&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_buffers&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;16k&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_busy_buffers_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;256k&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_temp_file_write_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;256k&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_max_temp_file_size&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_read_timeout&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;High load&lt;/h2&gt;
&lt;p&gt;Once you start pushing Nginx hard, you will run into issues. One of the first
problems is OS limits on the number of open sockets. The telltale sign is that
clients see a 5-second delay in responses (or a 503 error), while the app logs
look fine, responding in milliseconds.&lt;/p&gt;
&lt;p&gt;What is happening is that the client talks to Nginx, then Nginx talks to your
app. When there are not enough filehandles available to open a connection to
the app, Nginx queues the request.&lt;/p&gt;
&lt;p&gt;The default limit is typically 1024 open files, which is far too small for a
busy proxy. You will need to raise it at each layer, e.g. the systemd unit file,
the Nginx config, and the Erlang VM.&lt;/p&gt;
&lt;p&gt;Create &lt;code&gt;/etc/systemd/system/nginx.service.d/override.conf&lt;/code&gt; with the following
contents:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[Service]&lt;/span&gt;
&lt;span class="na"&gt;LimitNOFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;65536&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;systemctl&lt;span class="w"&gt; &lt;/span&gt;daemon-reload
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In the Nginx config file, set &lt;code&gt;worker_rlimit_nofile&lt;/code&gt; to a value less than or equal to &lt;code&gt;LimitNOFILE&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;worker_rlimit_nofile&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;65536&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;systemctl&lt;span class="w"&gt; &lt;/span&gt;restart&lt;span class="w"&gt; &lt;/span&gt;nginx
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can verify that the limits have been increased for the process by running:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cat&lt;span class="w"&gt; &lt;/span&gt;/proc/&amp;lt;nginx-pid&amp;gt;/limits
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
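&lt;p&gt;The Erlang VM has its own limit on the number of ports (sockets) it can have open. As a sketch, you can raise it in the release's &lt;code&gt;vm.args&lt;/code&gt; with the &lt;code&gt;+Q&lt;/code&gt; flag:&lt;/p&gt;

```
## vm.args
## +Q sets the maximum number of simultaneously open ports
## (sockets count as ports) in the Erlang VM
+Q 65536
```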

&lt;h2&gt;Running out of TCP ports&lt;/h2&gt;
&lt;p&gt;After that, you may run into lack of TCP ports. In TCP/IP, a connection is
defined by the combination of source IP + source port + destination IP +
destination port. In this proxy situation, all but the source port is fixed:
127.0.0.1 + random + 127.0.0.1 + 4000. There are only 64K ports. The
TCP/IP stack won't reuse a port for 2 x maximum segment lifetime, which by
default is 2 minutes.&lt;/p&gt;
&lt;p&gt;Doing the math:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;60000 ports / 120 sec = 500 requests per sec&lt;/li&gt;
&lt;/ul&gt;
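&lt;p&gt;As a quick sanity check, the same arithmetic in shell:&lt;/p&gt;

```shell
# ~60,000 usable ephemeral ports, each held in TIME_WAIT for ~120 seconds,
# gives the ceiling on sustained requests per second to one upstream
echo $((60000 / 120))   # prints 500
```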
&lt;p&gt;You can tune the global kernel config to reduce the maximum segment lifetime, e.g.:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="gh"&gt;#&lt;/span&gt; Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

&lt;span class="gh"&gt;#&lt;/span&gt; Recycle and Reuse TIME_WAIT sockets faster
net.ipv4.tcp_tw_reuse = 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The HTTP client may keep the connection open, assuming that there will be
another request. Depending on your use case (e.g. for an API endpoint), that
may not be needed. Shut it down immediately by adding the "&lt;code&gt;Connection: close&lt;/code&gt;"
HTTP header. This is particularly useful when handling abuse, e.g. DDOS attacks.&lt;/p&gt;
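&lt;p&gt;One way to do this in Nginx is to disable keep-alive for the location, which makes Nginx send a &lt;code&gt;Connection: close&lt;/code&gt; response header. This is a sketch; the &lt;code&gt;/api&lt;/code&gt; location is an assumption:&lt;/p&gt;

```nginx
location /api {
  # keep-alive disabled: Nginx responds with "Connection: close",
  # so the client releases the socket immediately
  keepalive_timeout 0;

  proxy_pass http://127.0.0.1:4000;
}
```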
&lt;p&gt;See &lt;a href="https://www.cogini.com/blog/rate-limiting-nginx-requests/"&gt;rate limitmiting Nginx requests&lt;/a&gt;
for details about the rate limiting config in this example.&lt;/p&gt;
&lt;p&gt;Nginx also has &lt;a href="https://serverfault.com/questions/528653/how-can-i-stop-nginx-from-retrying-put-or-post-requests-on-upstream-server-timeo"&gt;some&lt;/a&gt;
&lt;a href="https://trac.nginx.org/nginx/ticket/488#comment:4"&gt;complex&lt;/a&gt;
&lt;a href="https://news.ycombinator.com/item?id=11217477"&gt;behavior&lt;/a&gt;
when it runs into errors when proxying.&lt;/p&gt;
&lt;p&gt;It can be hard to figure out what is going on, as you don't get visibility.
The Nginx business model is to hide the detailed proxy metrics unless you buy
their Nginx Plus product, which costs thousands of dollars per server per year.
A dedicated proxy server like &lt;a href="http://www.haproxy.org/"&gt;HAProxy&lt;/a&gt; gives
more visibility and control over the process.&lt;/p&gt;
&lt;h3&gt;Listening directly&lt;/h3&gt;
&lt;p&gt;At a certain point, you may wonder what value you are getting from the local
proxy. If you are only running a single app on your instance, common in cloud
deployments, you can listen directly to HTTP traffic in Phoenix. That will give
you lower latency and overall lower complexity. This works fine: we have
Phoenix applications which handle billions of requests a day on the internet,
resisting regular DDOS attacks with no problems.&lt;/p&gt;
&lt;p&gt;In order to listen on a TCP port less than 1024, i.e. the standard port 80,
an app needs to be running as root (or have &lt;code&gt;CAP_NET_BIND_SERVICE&lt;/code&gt; capability).
Running as root increases the chance of security problems. If the application
has a vulnerability, then the attacker can do anything on the system. One
solution is to run the application on a normal port such as 4000, and
&lt;a href="https://www.cogini.com/blog/port-forwarding-with-iptables/"&gt;redirect traffic from port 80 to 4000 in the firewall using iptables&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Running multiple applications together&lt;/h3&gt;
&lt;p&gt;One place where running Nginx in front of the application makes sense
is when you are using Nginx to glue together multiple apps, e.g. using Phoenix to
improve performance of a Rails app. The first step is configuring Nginx to
route certain URL prefixes to Phoenix, e.g. &lt;code&gt;http://api.example.com/&lt;/code&gt; or &lt;code&gt;/api&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Beyond that, we need to integrate the applications, e.g. sharing a login
session between Phoenix and Rails. This depends on the specific authentication
frameworks used by each app.&lt;/p&gt;
&lt;p&gt;We can also implement the UI and navigation on Phoenix to match a Rails app,
allowing users to seamlessly work between both apps. The only thing the user
will notice is that the Phoenix pages are 10x faster :-)&lt;/p&gt;
&lt;p&gt;See &lt;a href="https://www.cogini.com/blog/incrementally-migrating-a-legacy-app-to-phoenix/"&gt;this blog post on migrating legacy apps&lt;/a&gt;
or &lt;a href="https://www.cogini.com/blog/presentation-incrementally-migrating-large-rails-apps-to-phoenix/"&gt;this presentation&lt;/a&gt;
for details.&lt;/p&gt;
&lt;p&gt;Another option is to have Phoenix handle the routing, e.g. with
&lt;a href="https://github.com/poteto/terraform"&gt;Terraform&lt;/a&gt;.&lt;/p&gt;</content><category term="DevOps"/><category term="nginx"/><category term="phoenix"/><category term="elixir"/></entry><entry><title>Serving Phoenix static assets from a CDN</title><link href="https://www.cogini.com/blog/serving-phoenix-static-assets-from-a-cdn/" rel="alternate"/><published>2018-05-11T00:00:00+08:00</published><updated>2018-05-11T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-11:/blog/serving-phoenix-static-assets-from-a-cdn/</id><summary type="html">&lt;p&gt;Phoenix is fast, but you can improve performance by serving requests for static
files like images, CSS and JS from Nginx or a content delivery network (CDN).
This allows your app to focus on dynamic content.&lt;/p&gt;
&lt;h1&gt;Serving static assets from Nginx&lt;/h1&gt;
&lt;p&gt;If you are &lt;a href="https://www.cogini.com/blog/serving-your-phoenix-app-with-nginx/"&gt;running your app behind Nginx&lt;/a&gt;,
configure …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Phoenix is fast, but you can improve performance by serving requests for static
files like images, CSS and JS from Nginx or a content delivery network (CDN).
This allows your app to focus on dynamic content.&lt;/p&gt;
&lt;h1&gt;Serving static assets from Nginx&lt;/h1&gt;
&lt;p&gt;If you are &lt;a href="https://www.cogini.com/blog/serving-your-phoenix-app-with-nginx/"&gt;running your app behind Nginx&lt;/a&gt;,
configure Nginx to serve the static files. Set the &lt;code&gt;root&lt;/code&gt; dir in the vhost to
point to the &lt;code&gt;priv&lt;/code&gt; dir in the release:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;root&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;/opt/myorg/foo/current/priv/static&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h1&gt;Serving assets from a CDN&lt;/h1&gt;
&lt;p&gt;A better choice for production apps is to use a CDN. In addition to offloading
requests, it also caches your content in servers close to your customers,
improving network latency. A CDN like CloudFlare can also protect your app from
DDOS attacks.&lt;/p&gt;
&lt;p&gt;Some CDNs work as a read-through cache. If the CDN gets a request for a file
that is not in its cache, then it contacts the app to get it, then caches it.
With others, you have to separately upload assets to the CDN when deploying.&lt;/p&gt;
&lt;p&gt;In a simple app deployed to a single server, we deploy the app and its
assets together, so the assets are always in sync with the code.&lt;/p&gt;
&lt;p&gt;With HTTP-level caching, the cache will by definition serve an old version of
the file. As you make changes to your static assets, you need to make sure that
the application uses the version of the assets corresponding to the version of
the code that's running.&lt;/p&gt;
&lt;p&gt;If you deploy a new version, the code should use the new assets. If you roll
back, it should go back to the old version.  Similarly, if you are doing a Blue
/ Green rolling deploy of your app in an auto scaling group, you will have a
mix of old and new app versions sharing the CDN.&lt;/p&gt;
&lt;h1&gt;Building static assets in Phoenix&lt;/h1&gt;
&lt;p&gt;Fortunately, Phoenix handles the process of generating unique names for
your assets. It maps a generic name like &lt;code&gt;/js/app.js&lt;/code&gt; in your template
into a versioned name like &lt;code&gt;/js/app-8f0317e89884de8b7b3a685928bee5e7.js&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;When we update the assets, they get a unique id, and the client will load
the new version. The unique filenames mean that multiple versions of
the same file can coexist in the cache.&lt;/p&gt;
&lt;p&gt;We can also set the cache lifetime to infinity everywhere, as they will
never change. This lets the browser and other caches keep them around longer
for better performance.&lt;/p&gt;
&lt;p&gt;The first step is to &lt;a href="https://hexdocs.pm/phoenix/deployment.html#compiling-your-application-assets"&gt;compile your application
assets for production&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mix&lt;span class="w"&gt; &lt;/span&gt;deps.get&lt;span class="w"&gt; &lt;/span&gt;--only&lt;span class="w"&gt; &lt;/span&gt;prod
&lt;span class="nv"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod&lt;span class="w"&gt; &lt;/span&gt;mix&lt;span class="w"&gt; &lt;/span&gt;compile
&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;assets&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;webpack&lt;span class="w"&gt; &lt;/span&gt;--mode&lt;span class="w"&gt; &lt;/span&gt;production&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod&lt;span class="w"&gt; &lt;/span&gt;mix&lt;span class="w"&gt; &lt;/span&gt;phx.digest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This builds assets under &lt;code&gt;priv/static&lt;/code&gt;. When we deploy a new release to production,
we need to copy the new asset files into the CDN.&lt;/p&gt;
&lt;p&gt;For example, we can sync the files to the S3 bucket backing your CloudFront
distribution.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;aws&lt;span class="w"&gt; &lt;/span&gt;s3&lt;span class="w"&gt; &lt;/span&gt;sync&lt;span class="w"&gt; &lt;/span&gt;priv/static&lt;span class="w"&gt; &lt;/span&gt;s3://&lt;span class="nv"&gt;$CDN_ASSETS_BUCKET&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
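&lt;p&gt;Since the digested filenames never change, you can also set a long cache lifetime when uploading. This is a sketch using the aws CLI's &lt;code&gt;--cache-control&lt;/code&gt; option:&lt;/p&gt;

```shell
# Digested assets are immutable, so caches can keep them for a year
aws s3 sync priv/static "s3://$CDN_ASSETS_BUCKET" \
  --cache-control "public, max-age=31536000, immutable"
```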

&lt;h1&gt;Configuring DNS&lt;/h1&gt;
&lt;p&gt;In order to use the CDN, Phoenix needs to generate URLs that point to it.
First, we set up a DNS &lt;code&gt;CNAME&lt;/code&gt; record pointing &lt;code&gt;assets.example.com&lt;/code&gt; to
the hostname of your CDN distribution.&lt;/p&gt;
&lt;p&gt;If you are using Amazon AWS and CloudFront CDN, then you should use Amazon
Route53 for your DNS, as it supports a special &lt;code&gt;ALIAS&lt;/code&gt; record that works like a
&lt;code&gt;CNAME&lt;/code&gt;, but follows the underlying resource if it changes. Route53 also allows
you to alias bare domains, e.g. &lt;code&gt;example.com&lt;/code&gt;, which is otherwise not
allowed.&lt;/p&gt;
&lt;p&gt;For high volume sites, we want to reduce the amount of data transferred. Make a
separate domain like &lt;code&gt;mycdn.com&lt;/code&gt; to serve your static assets, and it will not
have the cookies associated with your main domain. This is also better for
security, as session cookies are not sent to the CDN.&lt;/p&gt;
&lt;h1&gt;Configuring Phoenix to use the CDN&lt;/h1&gt;
&lt;p&gt;Configure your app to use the CDN by setting the &lt;code&gt;static_url&lt;/code&gt;
parameters in &lt;a href="https://hexdocs.pm/phoenix/Phoenix.Endpoint.html"&gt;the endpoint config&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:foo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;FooWeb.Endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;example.com&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;scheme&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;https&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;443&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:inet6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:system&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;PORT&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;static_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;scheme&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;https&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;assets.example.com&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;443&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="ss"&gt;cache_static_manifest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;priv/static/cache_manifest.json&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;See &lt;a href="https://shift.infinite.red/how-to-set-up-a-cdn-in-phoenix-af89074e0a62"&gt;this article&lt;/a&gt; for more
details.&lt;/p&gt;
&lt;h1&gt;Getting the user's public IP&lt;/h1&gt;
&lt;p&gt;When you are running behind a CDN, requests will look like they come from the
CDN.  In order to get the user's actual IP, we need to look at the headers set
by the CDN.&lt;/p&gt;
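&lt;p&gt;As a minimal sketch, a Plug can take the first (left-most) entry of the &lt;code&gt;X-Forwarded-For&lt;/code&gt; header. The module name is an assumption, and in production you should only trust this header when it is set by your own CDN or proxy:&lt;/p&gt;

```elixir
defmodule FooWeb.Plugs.RealIP do
  @moduledoc "Sets conn.remote_ip from X-Forwarded-For. Sketch: trust your CDN first."
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    case get_req_header(conn, "x-forwarded-for") do
      [value | _] ->
        # The header may contain a comma-separated chain; the left-most
        # entry is the original client (when set by a trusted proxy)
        ip = value |> String.split(",") |> List.first() |> String.trim()

        case :inet.parse_address(String.to_charlist(ip)) do
          {:ok, address} -> %{conn | remote_ip: address}
          {:error, _} -> conn
        end

      [] ->
        conn
    end
  end
end
```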
&lt;p&gt;See "&lt;a href="https://www.cogini.com/blog/serving-your-phoenix-app-with-nginx/"&gt;Serving your Phoenix app with Nginx&lt;/a&gt;"
and "&lt;a href="https://www.cogini.com/blog/getting-the-client-public-ip-address-in-phoenix/"&gt;Getting the client public IP address in Phoenix&lt;/a&gt;"
for details.&lt;/p&gt;</content><category term="DevOps"/><category term="elixir"/><category term="phoenix"/><category term="static assets"/><category term="cdn"/></entry><entry><title>Advantages of Elixir vs Golang</title><link href="https://www.cogini.com/blog/advantages-of-elixir-vs-golang/" rel="alternate"/><published>2018-05-08T00:00:00+08:00</published><updated>2018-05-08T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-08:/blog/advantages-of-elixir-vs-golang/</id><summary type="html">&lt;p&gt;A prospect recently asked me what the advantages are of Elixir over Golang.&lt;/p&gt;
&lt;p&gt;The simple answer is productivity. You get the best of both worlds: the
productivity of a high level language with the scaling power of the mature
Erlang platform.&lt;/p&gt;
&lt;p&gt;Go is a low level language, and performance is …&lt;/p&gt;</summary><content type="html">&lt;p&gt;A prospect recently asked me what the advantages are of Elixir over Golang.&lt;/p&gt;
&lt;p&gt;The simple answer is productivity. You get the best of both worlds: the
productivity of a high level language with the scaling power of the mature
Erlang platform.&lt;/p&gt;
&lt;p&gt;Go is a low level language, and performance is good, but it lacks
the productivity features of modern languages. It was developed for making
relatively low-level services at Google's scale, e.g. HTTP routing
infrastructure. When you are operating at their size, you need performance, but
the complexity of C++ is hard to deal with.  You need concurrency, but
multi-threaded network programming is error prone.  I spent years making VoIP
applications in C++, so I know this pain.&lt;/p&gt;
&lt;p&gt;It is also a reaction to the tendency for smart C++ and Java programmers to
create abstractions which make systems harder to maintain over time.  The
layers make it hard to jump into a big codebase and fix a problem. This is a
particular issue for SREs who are not just programmers, they also have to keep
on top of the challenges of cloud infrastructure, networking, storage, etc.&lt;/p&gt;
&lt;p&gt;Higher level scripting languages like Python are hard to operate at scale.
They suffer from poor performance and lack of concurrency. Dynamic typing makes
it hard to avoid errors at runtime, requiring lots of testing. Dependencies
make them difficult to deploy, so the ability to simply copy a binary to a
server is very attractive.&lt;/p&gt;
&lt;p&gt;Go is basically a simplified version of C++, a kind of "blue collar" language.
&lt;a href="https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440"&gt;The Go Programming Language&lt;/a&gt;
describes it this way:&lt;/p&gt;
&lt;p&gt;"The Go project includes the language itself, its tools and standard libraries,
and last but not least, a cultural agenda of radical simplicity. As a recent
high-level language, Go has the benefit of hindsight, and the basics are done
well: it has garbage collection, a package system, first-class functions,
lexical scope, a system call interface, and immutable strings in which text is
generally encoded in UTF-8. But it has comparatively few features and is
unlikely to add more. For instance, it has no implicit numeric conversions, no
constructors or destructors, no operator overloading, no default parameter
values, no inheritance, no generics, no exceptions, no macros, no function
annotations, and no thread-local storage. The language is mature and stable,
and guarantees backwards compatibility: older Go programs can be compiled and
run with newer versions of compilers and standard libraries."&lt;/p&gt;
&lt;p&gt;There is a certain "embrace the suck" attitude in Go. Deploying large systems
at scale sucks, so we choose simple tools that will always work and push
through the problems. I can understand this perspective, but I am not so cynical.
It ignores the improved productivity and safety that we can get through modern
programming language features.&lt;/p&gt;
&lt;p&gt;Ericsson was seeing similar issues when they created Erlang, the basis for
Elixir. They had been building their telephone switches in low level languages
and it was getting out of control. The systems were complex, buggy, and
expensive to develop. Their solution was a combination of a low-level runtime
to handle the problems of networking and concurrency once and for all, combined
with a high level language and framework to make programming easier.&lt;/p&gt;
&lt;p&gt;Erlang's distinguishing features are concurrency and fault tolerance.  The
lightweight process model makes it straightforward to create systems which
scale to millions of stateful connections, e.g. WhatsApp.&lt;/p&gt;
&lt;p&gt;The platform has great depth of tools to create, debug and manage large
production systems. The OTP framework standardizes patterns for building
services out of components. The platform includes features like an in-memory
key/value store, process registry, etc., as well as built-in solutions for
issues like production system tracing, high volume logging, alerting and
metrics.&lt;/p&gt;
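&lt;p&gt;As a minimal, illustrative sketch of the standardized component pattern (a
hypothetical counter service, not taken from the OTP documentation):&lt;/p&gt;

```elixir
defmodule Counter do
  use GenServer

  # Client API: a named, stateful service built from the standard pattern.
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  def increment, do: GenServer.call(__MODULE__, :increment)

  # Server callbacks: OTP supplies the message loop, timeouts, and tracing hooks.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:increment, _from, n), do: {:reply, n + 1, n + 1}
end
```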
&lt;p&gt;Elixir starts with the mature Erlang platform and adds powerful language
features like lisp-style macros and protocols. We get the ease of use of object
oriented languages, without the tight coupling between components. Functional
programming gives us generic algorithms which work across all data. Pattern
matching makes logic simpler. Binary matching syntax makes it easy to implement
network protocols reliably with high performance. Immutability and lack of side
effects make systems easier to reason about and debug. Unlike academic
functional languages like Haskell, the language is focused on practical
industrial programming, not type theory. Data structures are straightforward
and easy to understand.  Everyone talks about the concurrency, because it's
special, but the language itself is legitimately a joy to program in.&lt;/p&gt;
&lt;p&gt;As an example, error handling in Go &lt;a href="https://github.com/confluentinc/confluent-kafka-go"&gt;looks like this&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;:=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;panic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;:=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bar&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;nil&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;fmt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Message on %s: %s\n&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;fmt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Client error: %v (%v)\n&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;break&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The equivalent code in Elixir would simply be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Foo.Client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;Foo.Client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;bar&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Elixir's pattern matching lets us program for the success case. If we get an
error return (e.g. &lt;code&gt;{:error, reason}&lt;/code&gt;), then the unhandled match will fail, and the
process will exit. It will write a backtrace to the log with all the context of the
call so we can reproduce the problem in our dev environment.&lt;/p&gt;
&lt;p&gt;A supervisor monitors the process and manages all errors, including ones we may
have missed. This is different from exceptions, as it allows us to actually deal
with errors, e.g. retrying a call on connection timeout.&lt;/p&gt;
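&lt;p&gt;A sketch of what dealing with errors can look like (the function and error
atoms here are illustrative, not from a real library): match on the success
case, retry only the errors we expect, and let the supervisor handle the rest.&lt;/p&gt;

```elixir
defmodule Retry do
  # Run a function that returns {:ok, value} or {:error, reason}.
  # Retry only on :timeout, up to a limit; any other error is returned
  # unchanged, so a caller matching on {:ok, _} will crash and be
  # restarted by its supervisor.
  def with_retry(fun, retries \\ 3) do
    case fun.() do
      {:ok, _} = ok -> ok
      {:error, :timeout} when retries > 0 -> with_retry(fun, retries - 1)
      {:error, _} = error -> error
    end
  end
end
```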
&lt;p&gt;Elixir is taking the opportunity to rethink and refine a well established
system. Mature languages accumulate cruft over time. For example, Java has
multiple date-time classes (java.util.Date, java.sql.Date, Calendar) and we
have to convert between them. Functions have parameters in various orders, so
we have to keep looking at the docs. When Microsoft created C# and .NET, they
had the benefit of learning from Java, which helped them to quickly create a
full, consistent standard library and virtual machine. Elixir does that for
Erlang, but can also take advantage of all the existing Erlang
libraries.&lt;/p&gt;
&lt;p&gt;Similarly, a lot of the Elixir community comes from Rails, as the creator of
Elixir, José Valim, was a Rails committer. He started with the mature Rails
system and did it again, better, focusing on the problems he had maintaining
large Rails projects that had evolved over time. The platform has better
performance and reliability, but also takes a step back on "magic", as some of
the features which make simple projects easy end up causing problems when
they grow bigger.  In addition to standard MVC, the &lt;a href="http://phoenixframework.org/"&gt;Phoenix web framework&lt;/a&gt;
provides a "channels" abstraction which makes it
easy to create stateful web applications, e.g. web chat systems. There is a
GraphQL server as well which integrates with Phoenix.&lt;/p&gt;
&lt;p&gt;As with Ruby on Rails, programmer productivity and ease of use are a primary
focus for the community. Everything works well out of the box, and there are
standard, well integrated tools for testing, asset pipeline, deployment, etc.
People have taken the libraries that they loved from Ruby and implemented them
for Elixir. There are also cutting edge tools for static analysis and property
testing coming from the academic community.&lt;/p&gt;
&lt;h1&gt;The sweet spot for Go&lt;/h1&gt;
&lt;p&gt;The most interesting applications of Golang for me are network services which
need to be very fast, i.e. they are CPU bound. Garbage collection avoids a
major class of errors, making it safer. Go has built in concurrency using the
CSP model, and it can call into C libraries very efficiently. Good applications
are things like deep content inspection or running machine learning models.
When you are operating at huge scale, the cost of hardware actually starts to
be more important than programmer time.&lt;/p&gt;
&lt;h1&gt;The concurrency crisis&lt;/h1&gt;
&lt;p&gt;Most of us are not writing applications at the scale of Google, but we still
need to use the hardware we have efficiently. To do that, we need languages that can
handle concurrency, but the current crop of languages and platforms faces
challenges. It is hard to make existing languages and libraries safe, as doing so
breaks programmer assumptions. Network communication libraries need to be
rewritten to be non-blocking or use threads. Worse, shared data needs locking
to protect concurrent access. This has made it very difficult for existing
scripting languages like Python, Ruby and PHP to support concurrency. Node.js
is built around non-blocking IO, but can't take advantage of more than one CPU
without hacks like multiple processes. There is too much exposed machinery, and
the language lacks type safety. Java uses a similar thread-based approach to
concurrency as C++, but pervasive object orientation creates potential locking
problems for every object. Rust has potential as a safe replacement for C to do
systems programming, with concurrency. It is too low level for general
application development productivity, though.&lt;/p&gt;
&lt;p&gt;Elixir has had support for concurrency from day one, and it has 30+ years of
tooling developed for Erlang. While the absolute performance is less than
compiled languages, it is easy to parallelize tasks to make use of the machine.
If that's not enough, we can deploy the app to work across a cluster
of servers with few changes. This is the difference between &lt;em&gt;speed&lt;/em&gt; and &lt;em&gt;scalability&lt;/em&gt;.
Go focuses on low level performance and relies on systems like Kubernetes
to scale. That has its own nest of complexity to deal with, though.&lt;/p&gt;
&lt;p&gt;It is easier to add libraries to Elixir for practical web programming tasks
than it is to make other systems concurrent and reliable. We are using it to
build large systems today, and we know it works. Instead of fighting to
retrofit concurrency to existing languages, we can get on building the
next generation of systems.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="golang"/><category term="go"/><category term="languages"/></entry><entry><title>Yield optimization vs customer service</title><link href="https://www.cogini.com/blog/yield-optimization-vs-customer-service/" rel="alternate"/><published>2018-05-08T00:00:00+08:00</published><updated>2018-05-08T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-05-08:/blog/yield-optimization-vs-customer-service/</id><summary type="html">&lt;p&gt;Whenever we try to squeeze the last bit of utilization out of a system, there
is a danger that it will have a big negative impact on the user experience.  A
great example of this is overbooking in the airline industry. Usually a bad
customer experience does not involve getting …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Whenever we try to squeeze the last bit of utilization out of a system, there
is a danger that it will have a big negative impact on the user experience.  A
great example of this is overbooking in the airline industry. Usually a bad
customer experience does not involve getting &lt;a href="http://www.bbc.com/news/world-us-canada-39554421"&gt;beaten and dragged from an
aircraft&lt;/a&gt;, but avoiding
over-optimization definitely provides better service.&lt;/p&gt;
&lt;p&gt;An example is a recent experience I had with a hotel. I flew to Shanghai to work with
a customer, taking a morning flight and arriving at their office in the early
afternoon. We got right to work with product design meetings. We went to dinner,
then they dropped me off at my hotel, arriving at 8:10 PM.&lt;/p&gt;
&lt;p&gt;When I checked in, the lady working the front desk informed me that they
had canceled my reservation at 8 PM. She was not exactly rude, but had clearly
had the same conversation multiple times. She had no ability to help me and
was tired of this situation.&lt;/p&gt;
&lt;p&gt;My customer had booked the room from a portal website, and there was no
indication about check in time. In this case, we were completely flexible. The
hotel could have contacted us before the reservation expired, and we would have
checked in before dinner. They had no contact information for us, though, and
no mechanism to do it via the portal.&lt;/p&gt;
&lt;p&gt;So now I had to find another room. We went back to the portal, and found the
same hotel showing rooms available. Just for fun, we booked a room, then told
the staff. She said it doesn't work like that. In fact, after someone makes a
booking online, the portal notifies the hotel, and they accept or reject the
booking. There is no real-time information about whether rooms are available.
Most hotels sell their rooms through multiple channels, so it's common for them
to be double booked. They don't update the room status on the portals in real
time. The staff is used to having unhappy customers, but it's out of their
hands.&lt;/p&gt;
&lt;p&gt;China has some interesting rules about hotel prices. It's illegal to charge
more for a room during peak periods, so what they do is give variable discounts.
Generally speaking, if you contact a hotel directly, they have a lower discount
than going through a portal. The staff in the hotel has no incentive to match
an online price, especially when they are sold out. I have had hotels cancel
my booking because they can get more money for the room from a new customer.&lt;/p&gt;
&lt;p&gt;There was a big trade show going on, so it was hard to find a room.  It took
about an hour of calling around before we found one. This wasn't a crisis for
me, but imagine someone who didn't know Chinese, landing after a long
international flight for the first time in a new city, with no local support
structure. What if I missed my check-in because my flight was delayed, and I had
no way to communicate? Not a great experience. This kind of optimization
represents a small short-term improvement for the hotel, but a huge problem for
the customer. It is classic short term thinking.&lt;/p&gt;
&lt;p&gt;In the past, hotel bookings were controlled by the hotel itself. If they didn't
sell all their inventory, then that was just the way it was.  Now with the
portals, it's attractive to optimize yields, trying to sell every last space
for whatever price possible. In order to maximize further, they overbook,
expecting a certain percentage of cancellations and no-shows.&lt;/p&gt;
&lt;h1&gt;Doing better&lt;/h1&gt;
&lt;p&gt;We need to optimize intelligently to mitigate the effects on user experience.&lt;/p&gt;
&lt;h2&gt;Automation breeds inflexibility&lt;/h2&gt;
&lt;p&gt;Systems based on people and paper have the ability to deal with problems.
Automated systems only behave in one way, and customer service staff can't fix
them.  With paper, we could put a sticky note on a form, e.g. saying that a
customer called to say they will arrive late. Adding a "notes" field to
your database for customers, orders, and suppliers can provide great
flexibility and value.&lt;/p&gt;
&lt;p&gt;Optimized systems may separate the decisions from the information needed to
make them. When there are communication problems, the process can fail. Give
your people the authority to solve problems. Robots should not boss
people around.&lt;/p&gt;
&lt;h2&gt;Stressing your system&lt;/h2&gt;
&lt;p&gt;Pushing for the last bit of utilization puts stress on your system, exposing
weaknesses.  You have to think of every contingency, and execution has to be
perfect.  If the hotel always has an extra room in the basement that it doesn't
use, then it can solve all kinds of problems.&lt;/p&gt;
&lt;p&gt;Sometimes it's just math. An example from queuing theory: suppose we have a
bank with tellers who can serve a customer in five minutes, and customers
arrive every five minutes. With no variation, there is no waiting. In practice,
even a 5% variation will cause the queue to blow up at random times. Adding a
little extra capacity makes the queue go away forever.&lt;/p&gt;
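&lt;p&gt;The standard M/M/1 queueing result makes this concrete: with utilization
rho (arrival rate divided by service rate), the average number of waiting
customers is rho&lt;sup&gt;2&lt;/sup&gt;/(1 - rho), which explodes as rho approaches 1:&lt;/p&gt;

```elixir
# Average number waiting in an M/M/1 queue: Lq = rho^2 / (1 - rho),
# where rho = arrival_rate / service_rate (utilization, must stay below 1).
lq = fn rho -> rho * rho / (1.0 - rho) end

IO.inspect(lq.(0.80))  # about 3.2 customers waiting on average
IO.inspect(lq.(0.95))  # about 18 -- running near full utilization blows up
```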
&lt;p&gt;Optimization can cause catastrophic failure. The supply chain management trend
of minimizing inventory means that most stores have little or no stock; they get
replenished frequently. If there is a problem with the transportation network,
we will all have no food. What is your backup plan? How will your business work
if the internet goes down? &lt;a href="https://www.cogini.com/blog/an-example-of-user-stories/"&gt;Design your system to be resilient to
failures&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Context&lt;/h1&gt;
&lt;p&gt;Get as much context as possible and use it to inform your decisions.
Get the customer's flight information, and you know when they will arrive.
Track the flight, and you know when it is delayed, and you can hold their
room for them. If they are a frequent guest, then you can be pretty sure that
they will show up. Treat it as a long term relationship.&lt;/p&gt;
&lt;p&gt;When you are going through an intermediary, learn as much about the customer as
you can. Get their communication details. Sell direct. When you work through
partners, they control the customer. You become a commodity. Sites like
booking.com control a huge percentage of hotel bookings, often over 75%. They
also have a better user experience than the average hotel website. The customer
has a better selection, better pricing, and they can store their credit card
details to make bookings easier. It's one thing to pay a commission to be
discovered the first time, but hotels lose the same commission when the
customer comes back again. Make the direct, long term experience &lt;em&gt;better&lt;/em&gt; than
that of the intermediaries.&lt;/p&gt;</content><category term="Products"/><category term="saas"/><category term="user experience"/><category term="optimization"/><category term="design"/></entry><entry><title>Incrementally migrating large Rails apps to Phoenix</title><link href="https://www.cogini.com/blog/presentation-incrementally-migrating-large-rails-apps-to-phoenix/" rel="alternate"/><published>2018-04-28T00:00:00+08:00</published><updated>2018-04-28T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-04-28:/blog/presentation-incrementally-migrating-large-rails-apps-to-phoenix/</id><content type="html">&lt;p&gt;Here are the slides for the &lt;a href="https://www.cogini.com/files/incrementally-migrating-apps-to-phoenix.pdf"&gt;presentation on incrementally migrating large Rails
apps to Phoenix&lt;/a&gt; I gave at &lt;a href="https://2018.rubyconf.tw/program#jake-morrison"&gt;Ruby
Elixir Conf Taiwan 2018&lt;/a&gt;.&lt;/p&gt;</content><category term="Development"/><category term="presentations"/><category term="elixir"/><category term="phoenix"/><category term="ruby"/><category term="migrating"/></entry><entry><title>Is Elixir/Phoenix ready for production?</title><link href="https://www.cogini.com/blog/is-elixir-phoenix-ready-for-production/" rel="alternate"/><published>2018-03-11T00:00:00+08:00</published><updated>2018-03-11T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-03-11:/blog/is-elixir-phoenix-ready-for-production/</id><summary type="html">&lt;p&gt;How I evaluated Elixir in 2014 when we were deciding whether it was mature enough&lt;/p&gt;</summary><content type="html">&lt;p&gt;Someone &lt;a href="https://elixirforum.com/t/is-elixir-phoenix-ready-for-production/11749/17"&gt;asked this question&lt;/a&gt;
on the Elixir Forum. Following is my answer:&lt;/p&gt;
&lt;p&gt;When I first saw Elixir, we were developing custom apps (e-commerce, CRUD, etc)
with Ruby on Rails, Python, and PHP.&lt;/p&gt;
&lt;p&gt;We were using Erlang for the “tricky bits”, e.g. IoT and real time web, and
liked it a lot. In 2006, we tried to use Erlang for web development. The core
was great, but the rest of the ecosystem was lacking (e.g. database interfaces,
templates, automatic page loading). Productivity was not great, so we ended up
making hybrid apps.&lt;/p&gt;
&lt;p&gt;When I saw &lt;a href="https://littlelines.com/blog/2014/07/08/elixir-vs-ruby-showdown-phoenix-vs-rails"&gt;Chris McCord’s post comparing Rails and
Phoenix&lt;/a&gt;
in 2014, I was really excited. The metaprogramming capabilities of Elixir let
them build a web framework that combines the power of Erlang with the ease of
use of Rails.&lt;/p&gt;
&lt;p&gt;I wanted to make sure that it would be ok to bet the company on Elixir and
Phoenix. The basic productivity was great, and the Elixir and Phoenix teams
focus a lot on developer experience and getting started. I knew we would be
able to deliver custom development projects efficiently.&lt;/p&gt;
&lt;p&gt;My next question was whether we would have the libraries we needed for various
project requirements, e.g. interfacing with credit card payment systems. It is
fine to develop a few things, but it’s hard for a startup project budget and
timeline to cover development of basic libraries. As a safety valve, I looked
at calling Python and Ruby from Elixir using tools like &lt;a href="https://github.com/arthurcolle/elixir-snake"&gt;Elixir
Snake&lt;/a&gt; and
&lt;a href="https://github.com/hdima/erlport"&gt;Erlport&lt;/a&gt;. They worked fine, and I knew that
we would be ok. In practice, we haven’t needed to do that much, all the
libraries we have needed have been available or we could write them quickly. We
have mainly ended up using Python for things like
&lt;a href="https://pandas.pydata.org/"&gt;Pandas&lt;/a&gt; for data analysis.&lt;/p&gt;
&lt;p&gt;In addition to standard web development tasks, Elixir and Phoenix support the
next-generation “stateful web” applications like chat that are really hard to
build any other way. One thing that I really like is that a single platform can
do it all, i.e. public web, back end CRUD for admin, mobile APIs, interfaces to
3rd party APIs, and real time messaging. It actually simplifies development a
lot, because we don’t need to use multiple languages, multiple servers,
background job queues, etc.&lt;/p&gt;
&lt;p&gt;In the last three years, we have done all our new projects in Elixir, and it’s
worked fine. You don’t have to worry about OTP when getting started, you can build
apps with just Phoenix. You will want to learn it in the future, though, as it
is where the platform gets a lot of its power. As for difficulty finding
developers, we haven’t found it hard for devs to get up to speed on Elixir and
Phoenix. Within two weeks they are fully productive, particularly if they have
experienced people available to help.&lt;/p&gt;
&lt;p&gt;There is some learning curve associated with functional development. It’s
similar to learning object oriented programming. You can start programming
immediately, but it takes six months before you are really thinking
functionally. Then you find it difficult to use barbaric languages with
mutation and miss pattern matching dearly :-).&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="phoenix"/><category term="languages"/><category term="python"/><category term="ruby"/></entry><entry><title>Setting Ansible variables based on the environment</title><link href="https://www.cogini.com/blog/setting-ansible-variables-based-on-the-environment/" rel="alternate"/><published>2018-03-11T00:00:00+08:00</published><updated>2018-03-11T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-03-11:/blog/setting-ansible-variables-based-on-the-environment/</id><summary type="html">&lt;p&gt;When deploying applications, we usually have the same basic architecture in
different environments (dev, test, prod), but settings differ.  Some settings
are common to all the machines in the environment, e.g. the db server connection
string. We need to vary the size of instances depending on the environment …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When deploying applications, we usually have the same basic architecture in
different environments (dev, test, prod), but settings differ.  Some settings
are common to all the machines in the environment, e.g. the db server connection
string. We need to vary the size of instances depending on the environment, and
we need to manage application secrets like passwords per environment.&lt;/p&gt;
&lt;p&gt;What we would like is to put a machine in multiple groups, setting some
defaults for the whole system, then overriding them by role and environment.
Unfortunately that doesn't work in Ansible. There are priority rules between
variable sources, but all groups have the same priority. The Ansible "best practice"
(limitation) is that a variable should be defined in one and only one place.&lt;/p&gt;
&lt;p&gt;If your environments are relatively static, e.g. dedicated servers, then you
can do it as follows:&lt;/p&gt;
&lt;p&gt;Use the &lt;code&gt;all&lt;/code&gt; group to set defaults for all servers.  Next set group-specific
variables by server "role", which take priority over &lt;code&gt;all&lt;/code&gt;. Finally, set
host-specific vars, which take priority over group vars.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;all&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;host_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you are using the Ansible vault (recommended), these are directories, so you
end up with something like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;vars&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yml&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;vault&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Define your groups in &lt;code&gt;inventory/hosts&lt;/code&gt; and add servers to them.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[web-servers]&lt;/span&gt;
&lt;span class="na"&gt;server-01&lt;/span&gt;

&lt;span class="k"&gt;[app-servers]&lt;/span&gt;
&lt;span class="na"&gt;server-02&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you have environment-specific vars, then you can make:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;prod&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;prod&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;
&lt;span class="n"&gt;inventory&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;group_vars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;


&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;prod&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;03&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;prod&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;02&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;web&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;servers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You end up duplicating some variables to deal with the lack of hierarchy. You
can also use AWS dynamic inventory to assign servers to roles using tags, which
scales well when you have lots of servers.&lt;/p&gt;
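&lt;p&gt;As a rough sketch (the region and tag name here are illustrative), an
&lt;code&gt;aws_ec2&lt;/code&gt; dynamic inventory config that builds groups from instance
tags might look like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# inventory/aws_ec2.yml
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # An instance tagged role=web-servers lands in group role_web_servers
  - key: tags.role
    prefix: role
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Tagging instances then replaces maintaining static host lists by hand.&lt;/p&gt;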
&lt;p&gt;This system breaks down if you have lots of applications and environments,
though, e.g. multiple copies of the same app in production for different
customers.  One of our customers has a dozen apps deployed to AWS, each running
in dev/test/staging/demo/prod environments. In addition, they run production
environments in multiple regions (US, EU, China, etc.).&lt;/p&gt;
&lt;p&gt;In this case, we use a different structure to keep the combinatorial explosion
of variables under control.&lt;/p&gt;
&lt;p&gt;Make a playbook that sets up the machine or app, e.g. &lt;code&gt;playbooks/myapp/web-server.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;Configure web server&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;remote_user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ubuntu&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;hosts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;*&amp;#39;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;become&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;true&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;gather_facts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;true&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;vars_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/myapp/{{ env }}/app.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/myapp/{{ env }}/common.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/myapp/{{ env }}/datadog.yml&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;vars/myapp/{{ env }}/keys.yml&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ubuntu-common&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;datadog.datadog&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;when&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;quot;env&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;==&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod&amp;quot;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;or&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;env&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;==&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;demo&amp;quot;&amp;#39;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then make vars files for each combination of app and env, e.g. &lt;code&gt;vars/myapp/prod/app.yml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;vars&lt;/code&gt; directory is relative to the playbook, so it could be
&lt;code&gt;playbooks/vars/myapp/dev/app.yml&lt;/code&gt;, or it could live at the top level,
in which case the &lt;code&gt;vars_files&lt;/code&gt; entries would be
&lt;code&gt;../vars/myapp/{{ env }}/app.yml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Finally, call the playbook specifying the environment:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ansible-playbook&lt;span class="w"&gt; &lt;/span&gt;-i&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;myapp-&lt;/span&gt;&lt;span class="nv"&gt;$ENV&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;--extra-vars&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;env=&lt;/span&gt;&lt;span class="nv"&gt;$ENV&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;playbooks/myapp/web-server.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here we set an OS environment variable &lt;code&gt;ENV=prod&lt;/code&gt;, which selects a
separate inventory (which could also use a dynamic inventory script) and sets
the Ansible &lt;code&gt;env&lt;/code&gt; var, which loads the right vars files.&lt;/p&gt;
&lt;p&gt;If you need to set up an instance per customer, you can have
&lt;code&gt;vars/myapp/a/app.yml&lt;/code&gt; and &lt;code&gt;vars/myapp/b/app.yml&lt;/code&gt;. And you can
share common vars like &lt;code&gt;vars/myapp/common/app.yml&lt;/code&gt;.&lt;/p&gt;
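&lt;p&gt;In the playbook, that can be a matter of listing the shared file before the
per-customer one, since later &lt;code&gt;vars_files&lt;/code&gt; entries override earlier
ones (the &lt;code&gt;customer&lt;/code&gt; var here is hypothetical, passed in the same way
as &lt;code&gt;env&lt;/code&gt; above):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;vars_files:
  # Shared defaults first, then per-customer overrides
  - vars/myapp/common/app.yml
  - vars/myapp/{{ customer }}/app.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;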
&lt;p&gt;In all of these, the playbooks don't use a lot of conditional vars, though they
can distinguish between e.g. dev and prod if necessary. Generally it's best to
use roles with default vars set in e.g.
&lt;code&gt;roles/ubuntu-common/defaults/main.yml&lt;/code&gt;. The tasks can use something like
&lt;code&gt;when: env == 'prod'&lt;/code&gt; or you can conditionally include a role as shown above.&lt;/p&gt;</content><category term="DevOps"/><category term="ansible"/><category term="deployment"/></entry><entry><title>What makes a language popular?</title><link href="https://www.cogini.com/blog/what-makes-a-language-popular/" rel="alternate"/><published>2018-01-27T00:00:00+08:00</published><updated>2018-01-27T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-01-27:/blog/what-makes-a-language-popular/</id><summary type="html">&lt;p&gt;In the recent &lt;a href="https://research.hackerrank.com/developer-skills/2018/"&gt;HackerRank developer
survey&lt;/a&gt;, we can see
"Which languages do employers look for by industry?" and "Which languages are
developers planning to learn next?" In terms of popularity, there is a definite
swing to JavaScript and Python. In terms of mind share with language
enthusiasts, not so much …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In the recent &lt;a href="https://research.hackerrank.com/developer-skills/2018/"&gt;HackerRank developer
survey&lt;/a&gt;, we can see
"Which languages do employers look for by industry?" and "Which languages are
developers planning to learn next?" In terms of popularity, there is a definite
swing to JavaScript and Python. In terms of mind share with language
enthusiasts, not so much.&lt;/p&gt;
&lt;p&gt;I don't really care about absolute popularity. What I need is tools so I can
make good systems, combined with sufficient popularity for a healthy ecosystem.
The biggest challenge for programming languages right now is how to handle
concurrency and take advantage of multiple CPU cores without going insane.
We are also seeing a swing from dynamic languages towards strongly typed
functional languages like Haskell and OCaml.&lt;/p&gt;
&lt;p&gt;Over the last 20 years we went from low level compiled languages like C++ to
Java, then scripting languages like Perl, Ruby, and Python. Productivity of the
dynamic languages was much higher, but they were slower and more susceptible to
failures at runtime, so we needed lots of tests. The type inference in modern
functional languages gives us a good combination of productivity, performance,
and safety.&lt;/p&gt;
&lt;p&gt;There is a balance between mass adoption and language power.
In order to be successful, we need to have some combination of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Accessibility for beginning and average programmers&lt;/li&gt;
&lt;li&gt;A business model which drives investment in the platform&lt;/li&gt;
&lt;li&gt;Language features which support "programming in the large" and
  productive frameworks&lt;/li&gt;
&lt;li&gt;A good community of fellow programmers and jobs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;JavaScript is the ultimate weak language, but it's extremely popular because
it's accessible. It has a business model behind it which drives the platform.
As the browser makers speed up the runtime, Node.js benefits on the server
side. Google tried to get advanced features into the language, but couldn't
reach consensus with the other browser platforms. JavaScript is slowly getting
classes, optional typing and syntactic sugar. It won't get macros or advanced
features which require strong typing, though, as it would break compatibility
with the existing web and make it hard for beginners to learn. It's most
interesting as a compilation target for advanced languages like
&lt;a href="http://elm-lang.org/"&gt;Elm&lt;/a&gt; and &lt;a href="https://reasonml.github.io/"&gt;Reason&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Python is one of my main languages, but it's having trouble evolving. I was
disappointed when Python 3 was released, as it didn't have enough new features
to justify switching large production code bases. Guido was hostile to
functional style programming, e.g. map/reduce. Based on its growing success,
he was probably correct, but I don't expect significant improvements in the
language. It will be my scripting language of choice and useful for data science,
but not for building big systems.&lt;/p&gt;
&lt;p&gt;Java was really good at getting people to adopt it, not by being powerful, but
by being more accessible and easier to use.&lt;/p&gt;
&lt;p&gt;"We were not out to win over the Lisp programmers; we were after the C++
programmers. We managed to drag a lot of them about halfway to Lisp."
- Guy Steele, Java spec co-author and Lisp pioneer&lt;/p&gt;
&lt;p&gt;The Java VM has incredible amounts of engineering behind it. They started with
object orientation, then added support for dynamic languages. By building on
the Java VM, languages like Clojure and Scala get a good runtime and are not
seen as being too risky. It is probably the future of Ruby. Java has potential
as the "one VM to rule them all", unless Oracle screws it up. Microsoft .NET is
the same, they have this nice VB.NET / C# / F# thing going to combine
accessibility with power. It's a nice system, but too proprietary for my tastes.
I have seen too much bad behavior from Microsoft in my life to invest in the
platform.&lt;/p&gt;
&lt;p&gt;Haskell is very powerful, but the academic terminology makes it difficult
for beginning and workaday programmers. I am very interested in it myself, but I
don't think it will be a mainstream language. It can be a nice "secret weapon",
though, and it influences practical languages like Elm. OCaml gives us
languages like Reason and F# on popular platforms. Its level of typing may be
the right balance between strictness and getting things done. It's low level
enough for systems programming, while keeping safety.&lt;/p&gt;
&lt;p&gt;Erlang provides a very interesting language data point, as everything is
there for practical reasons. It is functional because that's the way you make
reliable and scalable systems. It does not do as much with types as I might
like, compared to Haskell, but that makes it easy to understand.&lt;/p&gt;
&lt;p&gt;Erlang has a solid VM with a business model supporting it. Ericsson uses it for
their telecom products, and companies like WhatsApp add features that they
need. It scales to use all our available CPU cores without drama. Mature tools
let us manage and debug servers in production, and we can easily build highly
available, highly scalable clusters. The platform has been under
development for 30 years. It's not going away, it's getting better.&lt;/p&gt;
&lt;p&gt;Elixir brings the Erlang platform to a new generation of developers. It is
easy to get started, and adds powerful features like Lisp-style macros and
Clojure-style protocols. It takes the standard library functions from Erlang
and makes them consistent and easy to use.&lt;/p&gt;
&lt;p&gt;Elixir has a good chance at being a mainstream language for web and server side
applications. It is a logical next step for the Rails community, as the
platform hits its limits. It has a great community, and lots of libraries are
getting written for web development and more advanced applications.
It is the standard platform for building massive chat systems, and is
popular for next generation IoT, financial, and health care systems. The Nerves
platform makes it easy to build reliable embedded systems and appliances.&lt;/p&gt;
&lt;p&gt;If Elixir stays at its current level of adoption, that's fine. In the
future I think we will see Erlang-style concurrency features in the JVM and
.NET platforms. As we saw with Twisted Python, however, it is really hard to
make existing code and libraries safe for concurrency. Every Java object is a
potential problem. The most likely result of better concurrency primitives in
the Java VM is probably Elixir being ported to it.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="languages"/><category term="programming"/></entry><entry><title>KYC wall of shame</title><link href="https://www.cogini.com/blog/kyc-wall-of-shame/" rel="alternate"/><published>2018-01-14T00:00:00+08:00</published><updated>2018-01-14T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-01-14:/blog/kyc-wall-of-shame/</id><summary type="html">&lt;p&gt;There is a saying that frustration is an occupational hazard of being a user
experience designer (or an excessively logical engineer). Once you start
designing processes, you see process problems everywhere, whether or not you
want to. As an American living in Taiwan, I am used to being the weird …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There is a saying that frustration is an occupational hazard of being a user
experience designer (or an excessively logical engineer). Once you start
designing processes, you see process problems everywhere, whether or not you
want to. As an American living in Taiwan, I am used to being the weird guy who
breaks the process. Lately, however, I have been having more than my usual
share of identity confusion (no jokes, please).&lt;/p&gt;
&lt;p&gt;Other than
&lt;a href="https://www.cogini.com/blog/paypal-know-your-customer-failure/"&gt;PayPal&lt;/a&gt;, which
was particularly bad, I won't name and shame, because the general state of the
art is pretty sad. There are lots of opportunities for startups to compete on
user experience.&lt;/p&gt;
&lt;h1&gt;Bank 1&lt;/h1&gt;
&lt;p&gt;My bank was sold to another bank (for the fourth time, now).  I had activated
my new card at the ATM, but that apparently wasn't enough, so they gave me a
call. The lady said, "For security, we need to verify your identity. What is
your birthday?" I was like, "Uh, no, that's not the way that security
verification works. Would you give your birthday to anyone who called you on
the phone?" We compromised, she gave me the year and month, and I gave her the
day.&lt;/p&gt;
&lt;p&gt;Next she wanted to set up a new phone banking PIN. But their phone system
doesn't recognize DTMF tones from mobile phones, so I couldn't do it.&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;p&gt;There are some logic problems here. First, they need to get the fundamentals of
authentication right. It's hard enough to train users to avoid scams, we should
not make people think it's normal. Second, why call me on my mobile if your
system can't handle it?&lt;/p&gt;
&lt;h1&gt;Bank 2&lt;/h1&gt;
&lt;p&gt;I opened a bank account when I first arrived in Taiwan years ago. The bank's
systems required the customer's national id number. As a foreigner, I didn't
have one, so they created a fake number for me from my birth date and name.&lt;/p&gt;
&lt;p&gt;A year ago, my internet banking stopped working, and we had to switch the
account to use my alien registration certificate number. That's better, but
still a problem. The format of the ARC numbers is slightly different from
national id card numbers, so their validation logic fails. I had to use my
wife's national id number for my login.&lt;/p&gt;
&lt;p&gt;It took about two hours sitting in the branch, as the staff diligently made
phone calls to people at the head office. At some points we almost lost hope
and closed the account, but eventually it worked. Recently, though, the bank's
systems changed, and the various parts of my account became dissociated. My ATM
card stopped working, saying that there was no bank account (another hour to
fix). Now the internet banking stopped working with a 500 error. At least the
paper account book still works....&lt;/p&gt;
&lt;p&gt;Better than this guy, I guess:&lt;/p&gt;
&lt;blockquote class="twitter-tweet" data-lang="en"&gt;&lt;p lang="en" dir="ltr"&gt;No
Emojis for your bank account name... 😂😂😂😂😂 &lt;a
href="https://t.co/S2wc5pZ2XZ"&gt;pic.twitter.com/S2wc5pZ2XZ&lt;/a&gt;&lt;/p&gt;&amp;mdash; Bud
(@this_is_bud) &lt;a
href="https://twitter.com/this_is_bud/status/735410748239794176?ref_src=twsrc%5Etfw"&gt;May
25, 2016&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async
src="https://platform.twitter.com/widgets.js" charset="utf-8"&gt;&lt;/script&gt;

&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;p&gt;Don't assume that everyone has a national id. Make your own unique identifier
and associate it with the user's id. How do you deal with foreigners?  What is
the key that links different systems in your organization? Is it the customer's
name? Their id number? Do you expect that number to never change? In some
countries the passport number is their national id number, in others it changes
when they renew their passport.&lt;/p&gt;
&lt;h1&gt;Bank 3&lt;/h1&gt;
&lt;p&gt;Another bank in Taiwan is verifying accounts for FATCA. I needed to fill out
the US W-9 form with my name. Of course, I actually had to fill it out three
times, with three names. One to match my US tax return, and two more for the
different ways they had broken my name on my bank account and credit card, e.g.
family name first, name chopped because it is too long.&lt;/p&gt;
&lt;p&gt;At some banks, my name is "MORR", because Chinese people have a maximum of four
characters in their name. I have learned that I can only make a wire transfer
to one bank during the day, because matching the account names requires human
attention. Otherwise, it fails with an obscure XML error.&lt;/p&gt;
&lt;h1&gt;Bank 4&lt;/h1&gt;
&lt;p&gt;We opened a bank account for my daughter, and my mother wired her some money.
When the money arrived, the bank rejected it because it didn't use her full
name, it had a middle initial. They wanted us to send the transaction back to
the US and do it again (paying the fees again, of course). I told them that if
we had to do that, we would close the account because they were too incompetent
to trust with money, and they relented.&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;p&gt;The Know Your Customer and Anti-Money Laundering process would be a lot easier
if you let your customer actually use their real name. Of course this gets a
bit challenging, e.g. different scripts or Chinese names, but it certainly
makes people happy. So you should let people enter their real names, and add a
field for a transliterated version if necessary. Maybe allow them to have
multiple variations on their name. Is the goal to verify that the name is
correct or to make sure it's unambiguous?&lt;/p&gt;
&lt;h1&gt;Bank 5&lt;/h1&gt;
&lt;p&gt;My corporate credit card has an enhanced verification process. Sometimes when
making online purchases, it bounces me to a page to verify my identity. It used
to ask me for my passport number (from a previous passport, but whatever).&lt;/p&gt;
&lt;p&gt;With no notice, the bank changed the system so that it required a code sent to
my mobile phone. Their credit card system didn't have the correct mobile
number, though, it had our office number in Hong Kong.  There was no place on
the web to see or change the phone number, I had to send them a letter by post,
which takes a week to process. So the effect was that I suddenly became unable
to make payments by corporate credit card.&lt;/p&gt;
&lt;p&gt;The bank redesigned their website. The new website is prettier, but doesn't
address the actual usability issues. If anything, it makes them worse, because
it fits less information on a single screen.&lt;/p&gt;
&lt;p&gt;As a business, our most fundamental issues are being able to reconcile payments
we receive and making outbound payments. The only information we get in
statements is "DEPOSIT" and "WITHDRAWAL". The bank only keeps 90 days of
transactions online, because, I guess, lines in a database are expensive.
That's a problem when we have questions on our annual accounting 18 months
later, though. So every month we copy out the transactions.&lt;/p&gt;
&lt;p&gt;They send us PDF equivalents of the paper statements, but they insist on
encrypting them with a crazy Java-based system that only works on Windows. If
an email doesn't get through, we have to pay them US$25 to mail us a duplicate
copy. The passphrase that encrypts the files suddenly stopped working at the
same time as their website changed. So now we can't open any old emails.
Instead of using the password protection built into the PDF standard, they
created a monster.&lt;/p&gt;
&lt;p&gt;Recently, they were collecting more information for their KYC process. There
was supposed to be a form on their website, but it disappeared in the rewrite.
So I had to email 100MB of scans of documents to them. In the course of it,
they asked me to provide the id of my partner, who I bought out 12+ years ago. It
seems, despite notifying the bank by postal mail (twice!), it didn't work,
so we will need to do it yet again. There is no acknowledgement of receipt of
documents.&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;p&gt;Make sure that what you are relying on for authentication actually works.  They
could have sent a message via SMS to the mobile numbers before switching the
system over.&lt;/p&gt;
&lt;p&gt;Let people view and change things online.&lt;/p&gt;
&lt;p&gt;Pay attention to the fundamental things that your customers care about.&lt;/p&gt;
&lt;p&gt;As a business, I would love to get an "API" driven bank account so I never had
to go to their website at all.&lt;/p&gt;
&lt;h1&gt;Bank 6&lt;/h1&gt;
&lt;p&gt;We were trying to reconcile the bank statements for our Vietnam branch. The
internet banking site has a way to export statements. Their export files had an
XLS extension, but were actually HTML. When we got that figured out, we tried
to parse the line items, written in Vietnamese.  After the tenth regular
expression variation, we realized that this was not generated by a computer,
some human was entering the descriptions of the transactions for deposits and
withdrawals.&lt;/p&gt;
&lt;h1&gt;Phone company 1&lt;/h1&gt;
&lt;p&gt;In our Taiwan branch, the registration lists the "responsible person" (me) and
also a "branch manager." We moved to a new office, so I went to change the
billing address for a mobile phone. The clerk thought that because there were
two people on the registration, he had to have both managers there to approve
the change of address. It took half an hour of arguing to get him to understand
the difference between OR vs AND.&lt;/p&gt;
&lt;p&gt;In practice, why do they care what the address is, as long as someone
pays the bill? Cue &lt;a href="https://www.youtube.com/watch?v=gWx6uA5aCrE"&gt;Mitch Hedberg&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I guess I should be happy they are authenticating the request. Despite everyone
adding SMS verification to their processes, it's not particularly
secure. It's often easy for an attacker to convince a mobile phone company that
I have lost my phone and need a new SIM, then they can intercept the SMS.&lt;/p&gt;
&lt;h1&gt;Phone company 2 + 3&lt;/h1&gt;
&lt;p&gt;Based on my previous professional experience with VoIP, when we set up the old
office, I tried to get a VoIP phone line. I thought I had succeeded, but when
they installed the system, it turned out that we had a VoIP number attached to a
dedicated ADSL line. The only way they could legally sell us a phone number was
to also sell us a physical phone line.&lt;/p&gt;
&lt;p&gt;When we moved, we tried to transfer the phone number to our new office.  Unlike
mobile numbers, it turns out that the "VoIP" phone numbers could not be ported.
Same thing for our fax number. It was associated with the ADSL line, which was
in a different telephone exchange, and could not be ported.&lt;/p&gt;
&lt;p&gt;I decided that since we hadn't gotten a non-junk fax in the last year, fax
is officially over. Losing the phone number was more annoying.&lt;/p&gt;
&lt;p&gt;Immediately after installing the new phone line, we started getting automated
scam phone calls from someone pretending to be the National Health Insurance
Administration, saying that there was fraud associated with my card. "Press 1
to talk with an operator." Talking with a foreigner broke the scammer's script
pretty fast...&lt;/p&gt;
&lt;h2&gt;Takeaways&lt;/h2&gt;
&lt;p&gt;What happens if your user loses their phone or changes their number? How will
you authenticate them?&lt;/p&gt;
&lt;p&gt;The bank has a mobile app. Why not use that to verify my identity?  I needed to
talk with customer service at my bank in the US. They had a button on their app
that said "Call Us." I thought, "Great! VoIP." Nope, it dialed the phone for me.
Of course, I was in Taiwan, so it didn't add an international prefix.  If you
already have a secure login on the phone, leverage it to get a secure
connection into the call center to talk. If you use chat, then you can provide
rich navigation instead of voice prompts, and avoid making the customer enter
their account number over and over.&lt;/p&gt;
&lt;p&gt;What kind of permissions does &lt;em&gt;your customer&lt;/em&gt; want to see for transactions?  In
my situation, any one manager should be enough. But if there were two partners,
then maybe it should require both of us. How would they know the difference?
How can you allow your customer to delegate responsibility for e.g. accounting
or making transactions? The boss is busy...&lt;/p&gt;
&lt;p&gt;Do your contracts specify fax as a legal notification mechanism? Does anyone
have a fax number anymore?&lt;/p&gt;</content><category term="Products"/><category term="kyc"/><category term="know your customer"/><category term="design"/></entry><entry><title>Debugging your space probe</title><link href="https://www.cogini.com/blog/debugging-your-space-probe/" rel="alternate"/><published>2018-01-03T00:00:00+08:00</published><updated>2018-01-03T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-01-03:/blog/debugging-your-space-probe/</id><summary type="html">&lt;p&gt;Years ago we were building an embedded vehicle tracker for commercial vehicles.
The hardware used an ARM7 CPU, GPS and GPRS modem, running uClinux.&lt;/p&gt;
&lt;p&gt;We ran into a tough bug in the initial application startup process. The program
that read from the GPS and sent location updates to the network …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Years ago we were building an embedded vehicle tracker for commercial vehicles.
The hardware used an ARM7 CPU, GPS and GPRS modem, running uClinux.&lt;/p&gt;
&lt;p&gt;We ran into a tough bug in the initial application startup process. The program
that read from the GPS and sent location updates to the network was failing.
When it did, the console stopped working, so we could not see what was going on.
Writing to a log file gave the same results.&lt;/p&gt;
&lt;p&gt;This is unfortunately common for embedded systems. For normal programmers, if
your machine won't boot up, you are having a bad day. For embedded developers,
that's just a normal Tuesday, and your only debugging option may be staring at
the code and thinking hard.&lt;/p&gt;
&lt;p&gt;This board had no Ethernet and only three serial ports: one for the console,
one hard-wired to the GPS, and one for the cellular modem. The ROM was
almost full (it had a whopping 2 MB of flash, 1 MB for the Linux kernel, 750 KB
for apps and 250 KB for storage). The lack of MMU meant no shared libraries, so
every binary was statically linked and huge. We couldn't install much else to
help us.&lt;/p&gt;
&lt;p&gt;A colleague came up with the idea of running gdb (the text mode debugger) over
the cellular network. It took multiple tries due to packet loss and high
latency, but suddenly we got a stack backtrace.  It turned out &lt;code&gt;printf()&lt;/code&gt; was
failing when it tried to print the latitude and longitude from the GPS, a
floating point number.&lt;/p&gt;
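&lt;p&gt;A remote debugging session of this sort typically pairs &lt;code&gt;gdbserver&lt;/code&gt; on the target with a cross gdb on the host. As a rough sketch (the binary name, port, and IP address here are placeholders, not the actual project values):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# On the target, reachable over the cellular link
gdbserver :2345 /bin/tracker

# On the development host, using a cross-compiled gdb
arm-uclinux-gdb tracker
(gdb) target remote 10.0.0.5:2345
(gdb) continue
(gdb) bt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Over a lossy, high-latency link, each of these steps may need to be retried, but once the target faults, &lt;code&gt;bt&lt;/code&gt; produces the stack backtrace.&lt;/p&gt;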
&lt;p&gt;One rule for normal programming is that if you think there is a compiler bug,
you are wrong -- it's a bug in your code. In this case, a few hours of debugging
and Googling five-year-old mailing list posts turned up a never-applied patch
to gcc which fixed an ARM7 bug affecting uClibc.&lt;/p&gt;
&lt;p&gt;This made me think of how the folks who make the space probes debug their
problems. If you can't be an astronaut, at least you can be a programmer,
right? :-)&lt;/p&gt;</content><category term="Development"/><category term="embedded"/></entry><entry><title>Is it time for Lisp in DevOps?</title><link href="https://www.cogini.com/blog/is-it-time-for-lisp-in-devops/" rel="alternate"/><published>2018-01-02T00:00:00+08:00</published><updated>2018-01-02T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-01-02:/blog/is-it-time-for-lisp-in-devops/</id><summary type="html">&lt;p&gt;We have been working on a project migrating a big Rails app from physical
hardware to AWS, and I have been doing a lot of automation work.&lt;/p&gt;
&lt;p&gt;It strikes me how we are doing the same thing over and over with different
tools: reading variables, templating files and running semi-declarative …&lt;/p&gt;</summary><content type="html">&lt;p&gt;We have been working on a project migrating a big Rails app from physical
hardware to AWS, and I have been doing a lot of automation work.&lt;/p&gt;
&lt;p&gt;It strikes me how we are doing the same thing over and over with different
tools: reading variables, templating files and running semi-declarative logic.
All with more or less broken syntax, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Terraform / Terragrunt&lt;/li&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;Shell scripts&lt;/li&gt;
&lt;li&gt;Packer JSON&lt;/li&gt;
&lt;li&gt;Jinja2&lt;/li&gt;
&lt;li&gt;CodeDeploy appspec.yml lifecycle scripts&lt;/li&gt;
&lt;li&gt;Capistrano&lt;/li&gt;
&lt;li&gt;Rake&lt;/li&gt;
&lt;li&gt;Cron specs&lt;/li&gt;
&lt;li&gt;CloudFormation&lt;/li&gt;
&lt;li&gt;Endless different config file syntaxes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;During the dotcom days we would laugh at the guys with title "HTML Programmer".
Now we are "YAML Programmers".&lt;/p&gt;
&lt;p&gt;This makes me think about replacing it all with a Lisp (probably Scheme),
following the "code is data" and "data is code" mantra. We would have lots of
parentheses, but we would have real variables with sane scoping rules, real
functions, proper syntax for function calls, and macros. Maybe we need the
YAML/JSON equivalent of &lt;a href="https://en.wikipedia.org/wiki/SXML"&gt;SXML&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Lisp is very powerful, but has not been able to break out. I learned it after
hearing smart people say, "learn Lisp, not because it's practical, but because
it will expand your brain and change the way you think about programming". I
have certainly found that to be true.  It has minimal syntax, which is one of
its biggest strengths, but tends to turn off newcomers.&lt;/p&gt;
&lt;p&gt;I think that part of the lack of success was bad timing. When Lisp was at its peak,
we didn't have open source, and compilers cost $5,000 per seat. Lisp was very powerful,
and faced competition from people selling FORTRAN for defense contracts who
played the game better.&lt;/p&gt;
&lt;p&gt;Another part was the "smug Lisp weenies". Lisp traditionally attracted smart
but antisocial people who loved to argue and were condescending to newbies.
They made it impossible to make progress on the legitimate improvements needed
to the language.&lt;/p&gt;
&lt;p&gt;Now we have open source tools and communities. There is a lot of interesting
stuff going on in Clojure and &lt;a href="https://racket-lang.org/"&gt;Racket&lt;/a&gt;, and people
are a lot more welcoming.&lt;/p&gt;
&lt;p&gt;The pendulum is swinging away from dynamic scripting languages. Go is popular
in ops, but I find it too low level, and it doesn't take advantage of recent
advances in programming languages.  I mainly program in
&lt;a href="https://elixir-lang.org/"&gt;Elixir&lt;/a&gt;, which has the power of Lisp-style macros,
approachable syntax and a great community.&lt;/p&gt;
&lt;p&gt;Maybe it's time to give Lisp another try. I could write a Scheme syntax for
Terraform. Or maybe I will take a swing at a Terraform clone in Elixir...&lt;/p&gt;</content><category term="DevOps"/><category term="lisp"/><category term="languages"/><category term="ansible"/><category term="terraform"/></entry><entry><title>PayPal Know Your Customer failure</title><link href="https://www.cogini.com/blog/paypal-know-your-customer-failure/" rel="alternate"/><published>2018-01-02T00:00:00+08:00</published><updated>2018-01-02T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2018-01-02:/blog/paypal-know-your-customer-failure/</id><summary type="html">&lt;p&gt;Applying for a merchant account so you can accept credit cards traditionally
takes weeks. You meet with the bank, show them your financial statements, and
explain your business. Then they make you an offer for e.g. 2.8% + $0.30 per
transaction (plus other mystery fees that you find …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Applying for a merchant account so you can accept credit cards traditionally
takes weeks. You meet with the bank, show them your financial statements, and
explain your business. Then they make you an offer for e.g. 2.8% + $0.30 per
transaction (plus other mystery fees that you find out about later). They may
require you to keep money in your account at all times, or only pay you 30 days
after your customer pays.&lt;/p&gt;
&lt;p&gt;When we did our first SaaS product 12 years ago, our bank said, "Oh, you are
doing business on the &lt;em&gt;Internet&lt;/em&gt;, so you are in our 'high risk' category. We
will charge 5% and you have to keep US$20K in your account." We were like, "Did
you just tell me to go screw myself? I guess so."&lt;/p&gt;
&lt;p&gt;The fundamental thing to understand is that because of consumer protection
laws, if you fail to deliver, the bank is responsible for refunding the money
to the customer. That makes them conservative.&lt;/p&gt;
&lt;p&gt;Once you are approved, as long as your business matches what you described, you
won't have problems unless you have an unusual number of returns or chargebacks.
Eventually your volumes get higher, and you can renegotiate.&lt;/p&gt;
&lt;p&gt;PayPal works differently. It's easy to get an account, you just sign up and
start receiving money. It's a breath of fresh air. If their anti-fraud algorithms
trigger for whatever reason, however, they lock your account and the review
process starts. They ask you to explain what the money is, send them
company documents, etc. The review is done at the end of the process, not the
beginning, and it can get really ugly.&lt;/p&gt;
&lt;p&gt;This has given PayPal a rocky reputation with entrepreneurs and startups. You
hear horror stories about companies bumping along with a moderate amount of sales, then
they get profiled in TechCrunch and PayPal shuts them down because of the
"suspicious" increase in sales. They lock your money for as much as six months,
and maybe just keep it forever. You are the collateral damage of PayPal's
algorithms, and you can't get a human to fix it. They don't care, because it's
a numbers game to them.&lt;/p&gt;
&lt;p&gt;There are certain businesses that they won't accept, e.g. conference
registration. There are other rules, e.g. you need to ship physical products
immediately after getting the order. So if your supplier is out of stock and
someone complains, you get shut down. Choosing PayPal because it's easy to get
started ends up causing you a lot of trouble later.&lt;/p&gt;
&lt;p&gt;When we did the &lt;a href="https://www.phdmovie.com/"&gt;PhD Movie&lt;/a&gt; sales
site, we took this very seriously. Jorge was releasing a movie related to his
popular &lt;a href="http://phdcomics.com/"&gt;PhD Comics&lt;/a&gt;. As soon as the
announcement went out to his mailing list, we would have tens of thousands of
people buying the movie immediately. Not only would the server need to handle
the load, we were afraid that PayPal would shut us down. We would have an
embarrassing customer experience and potentially lose the sales forever. Because of
this, we implemented multiple payment processors. Fortunately things went off
without a hitch, but it was nerve wracking.&lt;/p&gt;
&lt;h2&gt;Knowing Your Customer&lt;/h2&gt;
&lt;p&gt;The big story in banking these days is Know Your Customer (KYC) and Anti-Money
Laundering (AML). Customers of ours like &lt;a href="https://emq.com/"&gt;EMQ&lt;/a&gt;
put a tremendous amount of work into getting it right. There are existential
penalties from governments if they don't follow the regulations, but the
process has the potential to give a really bad customer experience.&lt;/p&gt;
&lt;p&gt;Recently all my banks have been upgrading their KYC and FATCA compliance,
and I have been seeing the KYC process from the other side. As an American
living in Taiwan, with company headquarters in Hong Kong, I am always a weird
case.&lt;/p&gt;
&lt;p&gt;My recent experience with PayPal shows how it can all go badly wrong.&lt;/p&gt;
&lt;h2&gt;PayPal&lt;/h2&gt;
&lt;p&gt;When I signed up for our company PayPal account in 2009, it was uneventful.
We started taking payments for our hosting business and software development
projects. We used PayPal to make payments. We became a business verified user.&lt;/p&gt;
&lt;p&gt;About two years ago, PayPal started sending me emails in simplified Chinese.
Not my favorite thing, but the system was working.&lt;/p&gt;
&lt;p&gt;About six months ago, our account was suddenly restricted, and we could only
transfer out $2000 per month, about 25% of our normal amount. They required us
to verify our account. We sent them documents and they released the restrictions.&lt;/p&gt;
&lt;p&gt;About a month ago, things got serious. We were restricted again. I gave
them all our documents, but there was a hitch. They didn't accept our address
in Hong Kong, saying that we had to enter an address in China.&lt;/p&gt;
&lt;p&gt;One entertaining point was entering my home address in Taiwan and getting an
error about an invalid postal code. I had to enter it in a different format, as
if I was sending a letter from China.&lt;/p&gt;
&lt;p&gt;I called customer service multiple times to no avail. Their compliance team
doesn't talk to customers, and their customer service team has no power to
do anything. They would say that they had sent messages to the compliance
team, but I was still getting an email (or 10) every day telling me that I had
to verify the account or it would be shut down. I uploaded more documents and got
more form emails saying that the address was not in China. I asked the
compliance team questions, but never got a human response.&lt;/p&gt;
&lt;p&gt;Finally someone investigated, and told me that the company had been set up in
China at the beginning. (I think that it was actually a data migration error at
some point, but that's just a guess.)&lt;/p&gt;
&lt;p&gt;He said that our address was in their system as:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Room 1005, Allied Kajima Building
138 Gloucester Road, Wanchai
Hong Kong SAR, China
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Because it ends with China, we must be a Chinese company. But simultaneously,
this address did not count as being in China for compliance purposes.
Is Hong Kong part of China? Is Taiwan part of China? (These questions are above
my pay grade.) We had a China PayPal account, not a Hong Kong PayPal account,
and it is &lt;em&gt;impossible&lt;/em&gt; to change it (Really? Errors cannot be corrected?).&lt;/p&gt;
&lt;p&gt;Their only solution was for us to delete the account and create it again. That
would leave us as a blank slate, though, with no history. The anti-fraud
systems would inevitably kick in, our account would be limited, and we would not
be able to use it to process our current payment volume.&lt;/p&gt;
&lt;p&gt;Then the tune changed, and they said that we could submit a formal affidavit
from a Chinese company describing their relationship with us. I asked them what
kind of relationship they wanted: landlord, customer, vendor? Should I find a
company in China and pay them $100 to write a letter saying that we are, &lt;em&gt;ipso
facto&lt;/em&gt;, their customer? If I am going to be making something up, what did they
want? Is this really the way KYC is supposed to work?&lt;/p&gt;
&lt;p&gt;They would say that someone would call me back and then not do it. They were
unable to call Taiwan mobile phone numbers reliably, so they would register an
unsuccessful "attempt" to call but not try again (the classic failure mode of
working to get a task off their todo list without actually serving the
customer).&lt;/p&gt;
&lt;p&gt;Then someone investigated more and said that when I created the account
I was in Taiwan (yes!). (It's good to record the IP address when users
register.) But somehow I was supposed to have intentionally created it as a
China company (nope). Classic "blame the customer" approach. Of course, if it
was true, wouldn't it be a KYC problem if they let me create an account in China?
It should be a Taiwan account, if not a Hong Kong corporate account.&lt;/p&gt;
&lt;p&gt;In the end they restricted our account again, and we are no longer a PayPal
customer. Ironically, this was a pure Know Your Customer failure on their
part. The information I provided them was complete, correct, and hasn't changed
since the start. They just made a mistake somewhere in their systems, and were
organizationally incapable of fixing it.&lt;/p&gt;
&lt;p&gt;Often when you see incompetence like this, it's because it's in a company's
business interest to be incompetent. Not in this case, though, as they lose
hundreds of dollars a month in fees. We set up wire transfer agreements with
our big vendors, saving us money.&lt;/p&gt;
&lt;h2&gt;Opportunities&lt;/h2&gt;
&lt;p&gt;We may think of PayPal as not having a physical location; they live "on the
Internet". This experience shows clearly, though, that PayPal is subject to
the rules of different countries, but unable to deal with the real complexity
of international users.&lt;/p&gt;
&lt;p&gt;There is a big opportunity for startups to use cryptocurrencies to handle
payments in a fundamentally better way. They can compete on service, dealing
with country-specific KYC and fiat currency issues. It can certainly be done
with a better customer experience than PayPal.&lt;/p&gt;</content><category term="Products"/><category term="kyc"/><category term="paypal"/><category term="failure"/><category term="cryptocurrencies"/><category term="design"/></entry><entry><title>Secure web applications with GraphQL and Elixir</title><link href="https://www.cogini.com/blog/secure-web-applications-with-graphql-and-elixir/" rel="alternate"/><published>2017-12-30T00:00:00+08:00</published><updated>2017-12-30T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-12-30:/blog/secure-web-applications-with-graphql-and-elixir/</id><summary type="html">&lt;p&gt;In traditional applications, the web application talks directly to the
database. It has rights to do anything, relying on application rules
to control access. If an attacker compromises it, then they can do
anything, e.g. grab all the data or create a funds transfer transaction.&lt;/p&gt;
&lt;p&gt;When security is critical …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In traditional applications, the web application talks directly to the
database. It has rights to do anything, relying on application rules
to control access. If an attacker compromises it, then they can do
anything, e.g. grab all the data or create a funds transfer transaction.&lt;/p&gt;
&lt;p&gt;When security is critical, e.g. in health care and financial services
applications, there are benefits to separating the application that interacts
with users from the back end data using a well defined API.&lt;/p&gt;
&lt;p&gt;In health care applications, it's common to have users with different roles
looking at the same information (patient, family member, nurse, doctor, admin).
Those rules can be very complex, and bugs may leak information. The data that
a user can access depends on their roles and relationships. A family member can view a
patient's medical information once they have been authorized. A doctor in a
clinic can view today's appointments and active cases. A specialist can view
the cases that have been referred to them.&lt;/p&gt;
&lt;p&gt;A banking customer can view their own account and make transactions, and they
may give third parties access to data. Inside the bank, we need to control
staff access to data based on their roles, e.g. only staff handling KYC
should have access to scans of IDs.&lt;/p&gt;
&lt;p&gt;Using an API lets us clearly define operations and the permissions needed to
execute them. We tie the operations to a user, and the API server ensures that
they have rights to access data. This clean interface provides a
central place for access control and audit trail. It is easier to understand
and to test. There is a single API to access the data shared between
web front end, mobile API and other services.&lt;/p&gt;
&lt;p&gt;Sounds great, but doesn't it have a lot of overhead? Not with GraphQL and Elixir.
We originally started using GraphQL for mobile APIs, then realized that it was
also great for security.&lt;/p&gt;
&lt;h2&gt;API design using REST and GraphQL&lt;/h2&gt;
&lt;p&gt;It is popular to build APIs using
&lt;a href="https://en.wikipedia.org/wiki/Representational_state_transfer"&gt;REST&lt;/a&gt;, modeling
our systems in terms of "resources," e.g. users, accounts, medical cases,
transactions. We then express all actions in terms of create, read, update,
and delete operations on those resources.&lt;/p&gt;
&lt;p&gt;While conceptually simple, REST can be quite "chatty," requiring a lot of
requests to build complex pages. We might make one request to get a list of
patients, then one per patient to get their details. We make another request
for open cases associated with each patient, then another to get the case
details.&lt;/p&gt;
&lt;p&gt;Things that would be joins in a relational database end up being multiple
web requests. If the requests are all local, the performance is not too bad, but it
certainly adds up. In a mobile context, round trips can be very slow, and
complex pages can take seconds to load.&lt;/p&gt;
&lt;p&gt;REST apps are supposed to use HTML-style links in messages, allowing the
application to navigate between resources. Many "REST" applications don't
follow these principles, however; they are just ad-hoc blobs of poorly defined
JSON delivered over HTTP.&lt;/p&gt;
&lt;p&gt;In REST, there is no standard way to specify search parameters, filtering or
subsets of a resource's fields. We might want to restrict sensitive fields
based on user role. An app listing cases may end up getting the body of each
case and throwing it away, only to fetch it again when the user opens the case.
Developers have to write custom code to handle and validate parameters. We end
up with multiple versions of APIs which differ only in the number of fields.&lt;/p&gt;
&lt;p&gt;In order to avoid making multiple requests to the back end, mobile app
developers may use "view APIs," e.g. a &lt;code&gt;/home-page&lt;/code&gt; API which gets all the data
for the home page in one shot. This results in an explosion of API functions
and versions as we add pages and data fields to the front end.&lt;/p&gt;
&lt;p&gt;GraphQL was invented at Facebook to solve this problem. The client sends a
query identifying the objects it wants, as well as fields in associated objects.
The GraphQL API server returns all the data in a single request. It can
authenticate the user and check their access permissions, filtering the
result set to ensure that they only see what they should.&lt;/p&gt;
&lt;p&gt;For example, here is a query for an article summary list:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;{
  article {
    title
    published_at
    author {
      name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Standard schemas define a data model, allowing the framework to handle
validation without hand coding. It handles filtering, selection and paging in
a standard way. It's not necessary to force complex actions into the REST
resource model, as GraphQL supports named operations with well defined parameters.&lt;/p&gt;
&lt;p&gt;It also has a standard mechanism to publish real-time event messages between
parts of the system, using the same schemas to define the structure. A client
can select cases and display them, then subscribe to see new cases as they
are created.&lt;/p&gt;
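&lt;p&gt;As a sketch, a subscription for the case example above might look like the following; the field names here are illustrative, not from a real schema:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;subscription {
  case_created {
    id
    title
    patient {
      name
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The server pushes a message matching this selection each time a new case is created, using the same schema types as the queries.&lt;/p&gt;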
&lt;h2&gt;Access control&lt;/h2&gt;
&lt;p&gt;Every request has a user context associated with it, represented by an
access token.&lt;/p&gt;
&lt;p&gt;On the web, when a user logs into the system, they pass their username /
password to the front end, which calls the API to authenticate the user. The
back end verifies the information and returns the token. The front end stores the
token in the user's session and uses it on subsequent requests.&lt;/p&gt;
&lt;p&gt;Mobile applications work the same way, calling the same API and storing the
token on the device while the session is active.&lt;/p&gt;
&lt;p&gt;Rich front end apps running in the browser can talk directly to the GraphQL
server, bypassing the front end web server entirely while sharing the login
session.&lt;/p&gt;
&lt;p&gt;If an attacker compromises the front end machine, then all they can do is
execute operations as currently active users. They can only see a small subset
of the data, and they lose access when the sessions expire.&lt;/p&gt;
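&lt;p&gt;The login step itself can be a GraphQL mutation. As a minimal sketch (the &lt;code&gt;login&lt;/code&gt; field and its arguments are hypothetical names, not a standard API):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mutation {
  login(username: "alice", password: "secret") {
    token
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The client then sends the returned token on each subsequent request, typically in an &lt;code&gt;Authorization&lt;/code&gt; header, and the server resolves it to a user context before executing the query.&lt;/p&gt;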
&lt;h2&gt;Elixir for the win&lt;/h2&gt;
&lt;p&gt;We use the &lt;a href="http://absinthe-graphql.org/"&gt;Absinthe&lt;/a&gt; GraphQL server, written in
the Elixir programming language. It handles GraphQL queries along with our own
custom application logic, combining traditional web development and GraphQL
services on the same platform (Phoenix).&lt;/p&gt;
&lt;p&gt;Modern stateful web applications use WebSockets or HTTP/2, making the user
interface more interactive and powerful. Phoenix Channels let us combine
web, mobile and other data sources like IoT using the same system. The Erlang
platform can easily handle the load, while staying manageable and secure.&lt;/p&gt;
&lt;h2&gt;Integration&lt;/h2&gt;
&lt;p&gt;The GraphQL server provides a common interface to multiple back end servers.
We can even make a single query resolve each field to a different back
end server, combining the results into one response.&lt;/p&gt;
&lt;p&gt;When interfacing with a REST back end, we can take advantage of the &lt;code&gt;Repo&lt;/code&gt;
application pattern used by Elixir's Ecto database library, but talking HTTP.
That fits into the standard Phoenix structure, allowing easy filtering of
queries via input parameters. For example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;GitHub.Issue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="ss"&gt;select&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="ss"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;elixir-ecto/ecto&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;and&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;open&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;and&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Kind:Feature&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="ss"&gt;order_by&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;desc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;:comments&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="nc"&gt;Repo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Introducing Ecto.Multi&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Support map update syntax&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Create test db from development schema&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Provide integration tests with ownership with Hound&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Development"/><category term="security"/><category term="graphql"/><category term="elixir"/><category term="architecture"/></entry><entry><title>Incrementally migrating a legacy app to Phoenix</title><link href="https://www.cogini.com/blog/incrementally-migrating-a-legacy-app-to-phoenix/" rel="alternate"/><published>2017-12-25T00:00:00+08:00</published><updated>2017-12-25T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-12-25:/blog/incrementally-migrating-a-legacy-app-to-phoenix/</id><summary type="html">&lt;p&gt;Over the years we have done lots of projects where we migrated an application
from one platform to another. We might do this to solve performance issues
or to switch to a better technology stack. This can be a challenge when
you have a big app that is in production …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Over the years we have done lots of projects where we migrated an application
from one platform to another. We might do this to solve performance issues
or to switch to a better technology stack. This can be a challenge when
you have a big app that is in production, and you need to do it incrementally.&lt;/p&gt;
&lt;p&gt;It depends, of course, on the specific technologies and application
functionality, but the process is overall quite similar:&lt;/p&gt;
&lt;h2&gt;Analyze your existing system and prioritize the work&lt;/h2&gt;
&lt;p&gt;Before starting, analyze the logs from your existing system. That will
give you information about the parts which are having failures and performance
problems. You may be surprised about what kind of traffic you are getting,
e.g. bots or broken clients.&lt;/p&gt;
&lt;p&gt;You can add the &lt;code&gt;$request_time&lt;/code&gt; variable to your &lt;a href="https://www.cogini.com/blog/serving-your-phoenix-app-with-nginx/"&gt;Nginx log
config&lt;/a&gt; to
get information about how long it took to handle each request, identifying
bottlenecks.  Look at the slow query logs in your database to see problematic
queries; you will need to deal with them in the new system as well.&lt;/p&gt;
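&lt;p&gt;As a minimal sketch (the format name &lt;code&gt;timed&lt;/code&gt; and the log path are
arbitrary), an Nginx log format that includes &lt;code&gt;$request_time&lt;/code&gt; might
look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# in the http block of nginx.conf
log_format timed '$remote_addr [$time_local] "$request" $status '
                 '$body_bytes_sent "$http_user_agent" $request_time';

# in the server or location block
access_log /var/log/nginx/access.log timed;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;$request_time&lt;/code&gt; is the full time Nginx spent on the request, so
sorting the log by that field quickly surfaces the slowest endpoints.&lt;/p&gt;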
&lt;p&gt;There may be performance problems caused by bad HTTP/HTML practices on the
legacy system, e.g. lack of JS/CSS asset consolidation.  Use the network
analysis tool in Chrome to see what assets the page is loading.
Spending a bit of time making sure that static assets are served from
Nginx or a CDN can help with performance during the transition with
low risk.&lt;/p&gt;
&lt;p&gt;Implement production monitoring. That will help you stay on top of errors
while you are transitioning and make sure that things are working properly. It
will show you 404 errors from broken links, etc., as you roll out the new
system.&lt;/p&gt;
&lt;p&gt;The result is a better understanding of how your legacy system works
and a list of priorities for things to fix.&lt;/p&gt;
&lt;h2&gt;Put the app behind a reverse proxy&lt;/h2&gt;
&lt;p&gt;Put both apps behind a proxy such as Nginx and use routes to direct traffic to one app or the
other, allowing the two apps to co-exist. You might have the API on
&lt;code&gt;api.example.com&lt;/code&gt;, or you might direct certain URL prefixes to Phoenix while
keeping the rest in the legacy app. A good example is splitting off the API
requests on &lt;code&gt;/api&lt;/code&gt; for performance while keeping the user registration and/or admin
pages in the old app.&lt;/p&gt;
&lt;h2&gt;Write the new or replacement features in Elixir/Phoenix&lt;/h2&gt;
&lt;p&gt;If the app is REST based, then you can generate standard REST
controllers/routes. You can also take advantage of &lt;code&gt;plugs&lt;/code&gt; to implement common
logic like authentication, avoiding repetition.&lt;/p&gt;
&lt;p&gt;Create Ecto database schemas for your legacy tables. After you switch
to Phoenix, you will want to have database migrations, but during the
transition period it's probably enough to just have a db snapshot that
you can talk to.&lt;/p&gt;
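&lt;p&gt;As a sketch of what that looks like (the table and column names here are
hypothetical), an Ecto schema can map a legacy table without changing it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;defmodule MyApp.LegacyUser do
  use Ecto.Schema

  # Legacy tables often have non-standard conventions, e.g. a custom
  # primary key and their own timestamp columns.
  @primary_key {:user_id, :integer, autogenerate: false}
  schema "users" do
    field :email, :string
    field :password_hash, :string
    field :created, :naive_datetime
  end
end
&lt;/code&gt;&lt;/pre&gt;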
&lt;h2&gt;Write tests&lt;/h2&gt;
&lt;p&gt;Tests give you the confidence to make changes and know that things are working
properly.&lt;/p&gt;
&lt;p&gt;If you are making an exact replacement, then you have an opportunity to verify
that the old code and new code behave the same. This is particularly easy for
API endpoints, because there is no UI to get in the way. Just collect inputs
and expected outputs from your legacy system and turn them into ExUnit test
cases. A logging or caching HTTP proxy can help with this.&lt;/p&gt;
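&lt;p&gt;For instance, the captured request/response pairs can drive a data-driven
ExUnit test (a sketch; the paths and fixture files here are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;defmodule MyAppWeb.LegacyParityTest do
  use MyAppWeb.ConnCase, async: true

  # Each entry pairs a request captured from the legacy system with
  # the JSON response body it returned.
  @cases [
    {"/api/users/42", "test/fixtures/users_42.json"},
    {"/api/orders?status=open", "test/fixtures/orders_open.json"}
  ]

  test "new API matches legacy responses", %{conn: conn} do
    for {path, fixture} &lt;- @cases do
      expected = fixture |&gt; File.read!() |&gt; Jason.decode!()
      assert json_response(get(conn, path), 200) == expected
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;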
&lt;h2&gt;Share authentication between the apps&lt;/h2&gt;
&lt;p&gt;This allows users to log in on one system and access pages on the other.&lt;/p&gt;
&lt;p&gt;For an API, it's generally straightforward, e.g. we just validate an API key,
though it might involve something like OAuth2. In any case, the new system
doesn't have to interact much with the legacy system.&lt;/p&gt;
&lt;p&gt;For a web app, it can mean getting into the guts of the session mechanism on
your legacy system. Most commonly, that involves getting a session id from a cookie
and looking it up in a db, Redis or Memcached. Then you may need to parse the
data stored in the session, e.g. to extract the user id and use it to
authenticate the user.&lt;/p&gt;
&lt;p&gt;This can get ugly if the legacy system is using a language-specific
serialization format for its convenience, e.g. &lt;a href="http://php.net/manual/en/function.serialize.php"&gt;PHP
serialization&lt;/a&gt;.
We wrote a parser for the PHP serialization format in Erlang years ago... if
there is interest, I will share it.&lt;/p&gt;
&lt;p&gt;If you control the source system, then you may be able to modify it to make
your life easier. For example, you could switch to a standard format for
session data like JSON. If the legacy system is using a proprietary session
store, you can likely switch it to use a database (e.g. MySQL or Memcached),
allowing both systems to talk to it.&lt;/p&gt;
&lt;p&gt;Another option is to set a new cookie on login just for interop, e.g. after
the user logs in, write a &lt;a href="https://jwt.io/"&gt;JWT&lt;/a&gt; which has the user id.
Then the Elixir side can grab the data easily using a library like
&lt;a href="https://github.com/bryanjos/joken"&gt;Joken&lt;/a&gt; or &lt;a href="https://github.com/bryanjos/plug_jwt"&gt;PlugJwt&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Things may be more urgent if you find that the legacy application was not using
secure password practices, e.g. it is storing passwords in clear text. In that case, it
might make sense to first migrate the login process to Phoenix and fix the
security issues, while creating sessions which are compatible with the legacy
system. As you transition, you may need to expire sessions, forcing people to
log in again. This lets you, for example, upgrade from MD5 hashes to bcrypt:
when a user logs in, you capture their password, generate the new hash, and
update the database.&lt;/p&gt;
&lt;p&gt;In an enterprise environment, where you have a lot of systems that may need to
interop for a long time, you can use a single sign-on system like SAML.&lt;/p&gt;
&lt;h2&gt;Convert the page layout and navigation&lt;/h2&gt;
&lt;p&gt;For web apps, you may want to have some pages in the new app and some in
the legacy app, allowing users to seamlessly work between both apps. The only
thing the user will notice is that the Phoenix pages are 10x faster :-).&lt;/p&gt;
&lt;p&gt;To do that you need to take the page layout and convert it to Phoenix
format, including compatible navigation links.&lt;/p&gt;
&lt;p&gt;That may be straightforward, but probably you want to update the style on the
new system, and it doesn't make sense to spend too much time on the old design.
It can take a surprisingly large amount of time to incrementally update the
graphical design on an existing site. It's faster and more predictable to
implement a new template and graphical design, so we need a transition
plan that minimizes the amount of work that will be thrown away when we
update the design later.&lt;/p&gt;
&lt;p&gt;For one legacy site written in Symfony, the original templates were a mess, so
we just did "File | Save Page As" in the browser to create the HTML template in
all of its horrible glory. We ran it through tidy to make it valid XHTML, then
roughly cut it into templates with header, body and footer.&lt;/p&gt;
&lt;p&gt;There are Elixir implementations of the template syntax of many different systems.
For example you could use &lt;a href="https://github.com/erlydtl/erlydtl"&gt;Django templates&lt;/a&gt; to convert
templates, integrating with Phoenix using &lt;a href="https://github.com/andihit/phoenix_dtl"&gt;PhoenixDtl&lt;/a&gt;.
Long term you are probably better off just converting templates to EEx, though.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="phoenix"/><category term="migration"/><category term="devops"/><category term="process"/><category term="architecture"/></entry><entry><title>Abuse and cryptocurrency business models</title><link href="https://www.cogini.com/blog/abuse-and-cryptocurrency-business-models/" rel="alternate"/><published>2017-12-23T00:00:00+08:00</published><updated>2017-12-23T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-12-23:/blog/abuse-and-cryptocurrency-business-models/</id><summary type="html">&lt;p&gt;When I design systems, one of my favorite things is looking at "abuse cases"
which define how they behave when confronted by bad actors.&lt;/p&gt;
&lt;p&gt;I am a big fan of cryptocurrencies. They give us an opportunity to design
systems which enforce and incentivize behaviors, e.g. removing risk and
rewarding …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When I design systems, one of my favorite things is looking at "abuse cases"
which define how they behave when confronted by bad actors.&lt;/p&gt;
&lt;p&gt;I am a big fan of cryptocurrencies. They give us an opportunity to design
systems which enforce and incentivize behaviors, e.g. removing risk and
rewarding good behavior. It's important to recognize that these are new,
unregulated markets, though, and some of the protections from bad actors that
we expect aren't there unless we build them in.&lt;/p&gt;
&lt;h2&gt;A replacement for credit cards?&lt;/h2&gt;
&lt;p&gt;If we think of Bitcoin as simply being a decentralized replacement for credit
cards, then we may forget the consumer protections which are built into the
current credit card system.&lt;/p&gt;
&lt;p&gt;If I am a merchant selling products from my online store, criminals can buy my
products with a stolen credit card. When we charge the card, it looks good, and
we ship the products. A month later the card holder gets their bill and disputes
the transaction. We have to refund their money and take the loss, or we get in
trouble with the credit card company and lose our ability to make sales.&lt;/p&gt;
&lt;p&gt;From the merchant's perspective, one of the nice things about Bitcoin is that
transactions are final. If there is fraud, where is my incentive to refund the
money? This goes against consumer expectations and consumer protection laws.&lt;/p&gt;
&lt;p&gt;There is a very interesting opportunity to rate vendors and consumers to deal
with this situation, but it's not part of Bitcoin.&lt;/p&gt;
&lt;h2&gt;A lottery?&lt;/h2&gt;
&lt;p&gt;In most lotteries, the players are ok with the payouts not matching the
inputs. Governments take advantage of this to fund various things, and, as my
economist friend says, "lotteries are a way for the government to convert
post-tax money into pre-tax money".&lt;/p&gt;
&lt;p&gt;The winners are happy with this situation, as they get a windfall. Some people
say that a lottery is a tax on people who can't do math, but even the losers
gain something they value, the hope of winning.&lt;/p&gt;
&lt;p&gt;Lotteries tend to be very heavily regulated due to abuse. In a legal lottery,
there can be a complete accounting of the money going in and going out.
It's actually a perfect application for a blockchain system.&lt;/p&gt;
&lt;p&gt;Years ago there was no government lottery in Taiwan. Chinese people like to
gamble, though, so the mob offered an underground lottery based on the Hong
Kong government lottery. They sold tickets to Taiwanese customers, then when
the Hong Kong lottery announced its winners, it would use their numbers and
pay out.&lt;/p&gt;
&lt;p&gt;There was no accountability in the first place for what percentage got paid
out, and there was an interesting twist: if you won, then they would negotiate
with you about how much you would actually receive. Getting out your winnings
was a lot harder than putting in your money, but the people who won were still
happy.&lt;/p&gt;
&lt;h2&gt;A gambling platform?&lt;/h2&gt;
&lt;p&gt;A company once contacted us for a quote on a website for online sports betting.
We designed a system and gave them an estimate to build it. They said, however,
that they had found an open source package that would do the job, written in
PHP. We had a look at the code and found a lot of issues with accuracy, error
handling, etc. I told them, but much to my surprise, they were unconcerned.&lt;/p&gt;
&lt;p&gt;In a standard gambling system, the odds are a mathematical reflection of how
people bet, e.g. 2:1 odds means that twice as many people are betting for one
team to win compared to the other. The house makes their money by taking a
percentage of the bids / payouts. In sports, though, it's common for people to
bet for their home team whether or not they actually think it will win. So some
gambling site operators take people's money but don't change the published
odds. If people are being emotional, then they will bet on a losing team and
the operator doesn't have to pay out. In this case, the accuracy of the system
is less important to the operator.&lt;/p&gt;
&lt;p&gt;There is an interesting opportunity for perfectly accountable gambling systems
based on the blockchain.&lt;/p&gt;
&lt;h2&gt;A stock exchange?&lt;/h2&gt;
&lt;p&gt;Right now, I am wondering whether the current cryptocurrency exchanges look
more like the gambling systems or lotteries described above than their regulated
equivalents. What does a bad actor exchange look like? How could you tell?&lt;/p&gt;</content><category term="Products"/><category term="abuse"/><category term="cryptocurrencies"/></entry><entry><title>ABC device support and explicit release criteria</title><link href="https://www.cogini.com/blog/abc-device-support-and-explicit-release-criteria/" rel="alternate"/><published>2017-07-14T00:00:00+08:00</published><updated>2017-07-14T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-07-14:/blog/abc-device-support-and-explicit-release-criteria/</id><summary type="html">&lt;p&gt;One of the most important decisions we make in product development is when
to make a release. From a business perspective, it's better to release
early and often, with a "&lt;a href="/blog/its-only-a-minimum-viable-product-if-it-hurts/"&gt;minimum viable product&lt;/a&gt;".&lt;/p&gt;
&lt;p&gt;It's also important to define explicit technical quality criteria, or we will
waste a lot of resources …&lt;/p&gt;</summary><content type="html">&lt;p&gt;One of the most important decisions we make in product development is when
to make a release. From a business perspective, it's better to release
early and often, with a "&lt;a href="/blog/its-only-a-minimum-viable-product-if-it-hurts/"&gt;minimum viable product&lt;/a&gt;".&lt;/p&gt;
&lt;p&gt;It's also important to define explicit technical quality criteria, or we will
waste a lot of resources and miss our target dates. We need to focus
development and testing resources to provide the best possible balance between
quality, time to market and implementation effort.&lt;/p&gt;
&lt;p&gt;As part of this, we define an explicit set of target devices and put them in
three classes, A, B and C.  On a web project, our target devices are web
browsers, and on a mobile project they are mobile devices.&lt;/p&gt;
&lt;p&gt;Class A devices are continuously tested, e.g. on every ticket. If a feature
doesn't work on a Class A device, we don't release.&lt;/p&gt;
&lt;p&gt;Class B devices are supported in a degraded mode. They need to work, but may
have less functionality. For example, they may be too old to support
advanced graphical effects, but all of the core functionality works.  We test
Class B devices as part of the release cycle, e.g. once every two weeks.  We
might allow a release if there is a minor problem on Class B devices, but block
the release if there is a serious problem.&lt;/p&gt;
&lt;p&gt;Class C devices are explicitly out of the support window. They may be too old,
unusual, or too new. If it works, we are happy, but we won't put in extra
effort to support them.&lt;/p&gt;
&lt;h2&gt;Web example&lt;/h2&gt;
&lt;p&gt;On web projects, for Class A, we might support IE 10, latest Chrome
and latest Firefox. For Class B, we might support Safari on Mac and IE 8. Safari
only represents about 5% of total browsers, but is the default browser for Mac.
Similarly, IE 8 is the last version of IE that runs on Windows XP, so if we
have XP users, then we need to test on it. For Class C, we might say
that we do not support IE 6 or Microsoft Edge browsers.&lt;/p&gt;
&lt;p&gt;On some projects, mobile browsers are Class C, and other projects they are
Class A.  For example, we might support Mobile Safari on iPad even if we don't
support Safari on Mac, e.g. the "lying in bed shopping for clothes" users
sometimes represent 40% of sales for women's e-commerce sites. If we are making
a native mobile app for a project, then full support for mobile web users would be
excessive; instead, we would detect them and display a page pointing them to the app
store.&lt;/p&gt;
&lt;h2&gt;Mobile example&lt;/h2&gt;
&lt;p&gt;Apple users typically upgrade immediately to the latest iOS version that will
run on their phone, so we normally only support the latest iOS version as Class A.
Android users typically upgrade the OS when they get a new phone, every two
years. So an OS release starts with about 5% of the market, holds there for
about six months, then at about 18 months we see a very rapid shift (see &lt;a href="https://developer.android.com/about/dashboards/index.html"&gt;the latest
Android Play stats&lt;/a&gt;).
So the latest Android OS version normally gets Class B support. The top 70%
OS versions by usage get Class A support. Older OS versions get Class B
support, and the bottom 5% gets Class C.&lt;/p&gt;
&lt;p&gt;Android tablets are typically not a primary platform, so they are Class B or C,
but if we were making a restaurant Point of Sale system, they would be Class A.&lt;/p&gt;
&lt;p&gt;Apple hardware is very consistent, so we can realistically support all the modern
iPhones. There is an &lt;a href="/blog/development-effort-of-android-vs-ios/"&gt;incredible variety of Android
hardware&lt;/a&gt;, however, with
different screen sizes, screen resolutions and hardware capabilities (e.g. CPU,
memory, graphics acceleration). We need to aggressively prioritize the hardware
devices that represent the most popular and representative phones used by
customers in the target market.&lt;/p&gt;
&lt;p&gt;Developers normally use Google Nexus phones, as Google does the
Android OS port itself and releases regular OS updates. So that platform is
automatically tested on every release. But it's common for something to work on
the developer's phone and not work on an older, low end phone.&lt;/p&gt;
&lt;p&gt;For startups, we normally support a flagship Samsung device and a
few other popular top 10 phones. After that, we rely on partners and crash
reporting software to identify problems.&lt;/p&gt;
&lt;p&gt;We have produced apps with tens of millions of downloads. At those levels
we effectively need to support every device, but we are dealing with all kinds
of broken hardware and we need to prioritize testing efforts.&lt;/p&gt;
&lt;h2&gt;Quality criteria and risk management&lt;/h2&gt;
&lt;p&gt;We need to release and iterate in order to move forward with the business.
Having explicit release criteria allows us to hit release dates effectively,
getting to market and avoiding wasting resources and losing credibility with partners.&lt;/p&gt;
&lt;p&gt;It is always possible to improve a product, but having regular releases is
more important than having perfect releases. As we get to a final release, we
need to objectively evaluate which defects should block the release and which
we can fix later (or never).&lt;/p&gt;
&lt;p&gt;We typically do that by classifying defects by severity vs impact.&lt;/p&gt;
&lt;p&gt;In general we want Class A devices representing the majority of our
users to have no significant defects (nothing High or Medium severity). A
severe defect that affects 5% of users but has a workaround might be
acceptable. A low severity bug that affects all users might be acceptable, e.g.
a cosmetic or performance problem in certain situations.&lt;/p&gt;
&lt;p&gt;We also need to use statistical techniques to identify how many unidentified
defects may be in the product. For example, if we have done final release
testing for one week and we have seen no High severity defects, one Medium
severity defect and 10 Low severity defects, we might say that we can release
if we have no Medium severity defects at the end of the next week and a maximum
of 10 Low severity defects on Class A platforms and one Medium defect on Class
B.&lt;/p&gt;
&lt;p&gt;At a certain point we reach diminishing returns on testing and we need to
present the product to the full set of devices and use cases to see the issues.&lt;/p&gt;
&lt;p&gt;Some applications have particular high quality requirements, e.g. health care
or financial. A "small" bug dealing with money has a much bigger impact than an
ugly button. Problems with health care applications can literally cost lives or
expose us to lawsuits from leaking protected health information.&lt;/p&gt;
&lt;p&gt;In these cases we need more than quality assurance processes: we need
processes to mitigate risk, detect problems and respond to issues. Monitoring
becomes very important to identify issues in production and quickly deploy
fixes. For embedded systems, we need to be able to upgrade devices remotely.
It does us no good to fix a bug in our office and not be able to fix devices
deployed on customer sites.&lt;/p&gt;
&lt;p&gt;Psychologically, it's difficult to put something out in the world that is not
perfect, but we need to do it. If we fail to have explicit release criteria, we
may delay the product launch until everything is perfect. The effect may be
that we lose credibility with partners and investors. We make the release
process so hard that we lose the ability to iterate and learn from our
customers, and, ironically, produce a worse product.&lt;/p&gt;</content><category term="Products"/><category term="testing"/><category term="development"/><category term="process"/></entry><entry><title>Anti-pattern: graphical design driven development</title><link href="https://www.cogini.com/blog/anti-pattern-graphical-design-driven-development/" rel="alternate"/><published>2017-07-14T00:00:00+08:00</published><updated>2017-07-14T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-07-14:/blog/anti-pattern-graphical-design-driven-development/</id><summary type="html">&lt;p&gt;Everyone wants to have a beautiful graphical design for their product. The
problem comes when graphical design becomes more important
than usability and affects the efficiency of the development process.&lt;/p&gt;
&lt;p&gt;There is an anti-pattern we call "graphical design driven development." The
way it goes is that the client starts by …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Everyone wants to have a beautiful graphical design for their product. The
problem comes when graphical design becomes more important
than usability and affects the efficiency of the development process.&lt;/p&gt;
&lt;p&gt;There is an anti-pattern we call "graphical design driven development." The
way it goes is that the client starts by creating a graphical design for
their product in Photoshop. The designer focuses on making something beautiful.
Because Photoshop can do anything, they make everything custom, e.g. custom
fonts, custom buttons, drop shadows, custom controls.  They tweak
the margins below the headings, add hairlines, make it really shine. They add
special user avatars, images and content, so each page in the mockup looks great.&lt;/p&gt;
&lt;p&gt;They make a page flow diagram, and give it to the developers to estimate.
Developers don't really see the graphical details; they just count buttons and
think of the logic, database and communications protocols.  They create an
estimate, the client approves and they get started.  They implement all the
custom things on the page exactly as the designer drew it, going through a
couple of iterations to get it pixel perfect. It looks great on iOS, but is
really hard to get the design perfect on Android because of the &lt;a href="/blog/development-effort-of-android-vs-ios/"&gt;different
screen sizes / resolutions and dynamic layouts&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It ends up taking more time than everyone expected, but finally it's ready, and
we give it to beta users to start using the app with real content.&lt;/p&gt;
&lt;p&gt;We find that the content is too big to fit on the page, or too small and looks
lonesome. Normal users don't bother with avatars, so we have a line of
"tombstones" with the default user avatar. We find that some features are hard
to use. It takes too many clicks to do common things. Some pages need to be
split up, others combined. That custom control we implemented doesn't get used,
or needs to be modified.&lt;/p&gt;
&lt;p&gt;We need a new design for the new pages. So the graphical designer comes in
again, and they spend a week or so making new beautiful designs, while the
developers wait. The developers implement all the custom details again, and we
have a round or two of tweaks. Or perhaps the graphical designer has been
working on new designs while the developers were doing the initial development.
Everyone is excited about the new things in the pipeline, and the investors are
asking when it will be done. We keep telling them that it will be there soon,
but we are losing credibility with every delay. How hard could it be to just
implement a few pages of buttons?&lt;/p&gt;
&lt;p&gt;We spent all our budget getting the app done, but at least now it's finally
released and beautiful. We can see some things that are not optimal, but
changing is so painful, we don't want to do it.&lt;/p&gt;
&lt;p&gt;There is a better way.&lt;/p&gt;
&lt;p&gt;Fundamentally, the most important thing for your product is whether it helps
your users achieve their goals. Feeling is important, and graphical design is a
big part of that, but we need a process that delivers usability first.
It may sound like I am hating on the designers, but the good ones understand this.
Having "concept" graphical designs can help us with fund raising presentations
and overall UI approach, but they can't drive the design.&lt;/p&gt;
&lt;p&gt;We start with a &lt;a href="/process/"&gt;user-focused process&lt;/a&gt; that defines &lt;a href="/blog/an-example-of-user-personas/"&gt;user
personas&lt;/a&gt; and goals, then &lt;a href="/blog/an-example-of-user-stories/"&gt;user
stories&lt;/a&gt;. Once we have a good base, we start
prototyping the application.&lt;/p&gt;
&lt;p&gt;One of the best ways is to start with &lt;a href="http://keynotopia.com/"&gt;Keynotopia&lt;/a&gt;.
They provide a set of reasonably-priced templates which you can use inside of
Keynote or PowerPoint to create your user interface. Their library of standard
mobile buttons and controls let you make realistic user flows which switch
pages by clicking on buttons or links. Entrepreneurs and product managers can
create the initial prototype without a designer, and we can go back and forth
quickly to iterate on the design. We don't have to wait on graphical design or
use specialized software like Photoshop or Illustrator.&lt;/p&gt;
&lt;p&gt;When we are happy with the prototype, we start implementing it as a real mobile
app. We first create a skeleton app which has the pages but minimal real logic.
We use &lt;a href="https://developer.apple.com/xcode/interface-builder/"&gt;Interface Builder&lt;/a&gt;
to create the UI by dragging standard controls onto the pages and connecting
them. We use mocked up static data and do as little custom UI work as we can,
focusing on how the app works &lt;em&gt;dynamically&lt;/em&gt; to handle user tasks, not what it
looks like. If we find we need to make changes to the pages, we can do it with
minimal rework. We can show the investors the prototype and they can try it
out themselves. It's clear that it's a work in progress, but it's functional and
doesn't crash. They see progress in the initial UI prototype and in
the first version of the app. We know we have gotten it right when they say
things like "I could use this today, I don't care what it looks like."&lt;/p&gt;
&lt;p&gt;Next step, the graphical designer comes in and creates a beautiful design
based on the actual app. While they are doing that, the developers implement
the application logic and communication with the back end.&lt;/p&gt;
&lt;p&gt;This way we do the prototyping and iteration work at the beginning when change is
cheaper, using presentation software instead of code. We create a development
road map which is well defined and predictable, allowing us to hit our delivery
dates. During development we create small feature tickets which can be completed
incrementally instead of a monolithic and ill-defined "graphical spec" which is always
"almost done." We avoid thrashing the dev team with changes; they can sit down
and execute the bulk of the app all at once. The process is more efficient,
faster and saves money. And it ends up being a better app, beautiful &lt;em&gt;and&lt;/em&gt;
usable.&lt;/p&gt;</content><category term="Products"/><category term="graphical design"/><category term="development"/><category term="process"/></entry><entry><title>Development effort of Android vs iOS</title><link href="https://www.cogini.com/blog/development-effort-of-android-vs-ios/" rel="alternate"/><published>2017-07-14T00:00:00+08:00</published><updated>2017-07-14T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-07-14:/blog/development-effort-of-android-vs-ios/</id><summary type="html">&lt;p&gt;We often need to estimate development projects which have both iOS and Android.
It's tempting to say that Android will be the same, but what we have found
is that Android takes more effort.&lt;/p&gt;
&lt;p&gt;The rule of thumb in Silicon Valley is that it takes two to three times the …&lt;/p&gt;</summary><content type="html">&lt;p&gt;We often need to estimate development projects which have both iOS and Android.
It's tempting to say that Android will be the same, but what we have found
is that Android takes more effort.&lt;/p&gt;
&lt;p&gt;The rule of thumb in Silicon Valley is that it takes two to three times the
work to build an Android app with a level of polish equivalent to a "world class" iOS app.
That may be true for some applications, but is generally excessive when you are
first starting.&lt;/p&gt;
&lt;p&gt;We generally see about 30 to 50 percent more time required to build the initial
Android app compared to iOS. Then the effort increases again when we need to
support a wider range of devices or for applications which are more dependent
on the hardware, e.g. using video or Bluetooth, or have complex user
interfaces.&lt;/p&gt;
&lt;p&gt;There are a number of reasons for this, primarily coming down to
quality of developer tools, platform fragmentation and testing effort:&lt;/p&gt;
&lt;h2&gt;1. The development tools for Android are not as productive as for iOS&lt;/h2&gt;
&lt;p&gt;Apple provides the very mature Xcode integrated development environment for
iOS, and everything works well out of the box.
Tools like &lt;a href="https://developer.apple.com/xcode/interface-builder/"&gt;Interface Builder&lt;/a&gt;
make it easy to lay out UIs graphically, the debugger and profiler are
easier to use, and the simulator compiles the app to run natively and
quickly on the dev machine.&lt;/p&gt;
&lt;p&gt;Android developers use Android Studio or general purpose IDEs like Eclipse.
They have to set up their environment from multiple pieces and plugins.
Build performance is much slower than on iOS (minutes instead of seconds),
making iteration slow. The app runs ARM CPU code in an emulator, which is
slower than the actual hardware.&lt;/p&gt;
&lt;h2&gt;2. Java is a lower-level language than Objective-C or Swift&lt;/h2&gt;
&lt;p&gt;Objective-C and the new Swift programming language are higher level,
making coding more efficient. Android apps typically have 40% more code
than iOS apps.&lt;/p&gt;
&lt;p&gt;Apple has also made some fundamental design decisions which give a better
customer experience. A common programming mistake in both platforms is a
null pointer exception.  On iOS, this results in an empty result object. On
Android, it results in a crash, which gives a poor user experience and
perception of quality. For the developer, it wastes time as they need to
restart the app and go back to where they were working whenever a crash
occurs.&lt;/p&gt;
&lt;h2&gt;3. Android APIs are lower level and less complete than for iOS&lt;/h2&gt;
&lt;p&gt;Android often requires a 3rd party library where iOS has functions in
the standard framework.  This makes it harder to use multiple libraries
together and requires the developer to spend time finding and evaluating
library options.&lt;/p&gt;
&lt;p&gt;High level widgets and consistent UI make prototyping easier on iOS.
For example, the CoreData API on iOS makes it easy for developers to work
with data "objects" and store them without considering the low level details.
Android developers need to use raw SQL with databases or other 3rd party
frameworks.&lt;/p&gt;
&lt;p&gt;Custom screen animations are easier to implement in iOS, and have hardware
acceleration making them faster and more energy efficient.&lt;/p&gt;
&lt;h2&gt;4. The iPhone has a limited number of display aspect ratios and resolutions&lt;/h2&gt;
&lt;p&gt;For iPhone, it is reasonable to design for specific screen sizes, making
"pixel perfect" UI mockups in Photoshop and implementing them exactly in the app.
Testing on a few devices covers all the important resolutions, and it is
practical to test on &lt;em&gt;all&lt;/em&gt; the iOS devices in the market.&lt;/p&gt;
&lt;p&gt;Android has &lt;em&gt;many&lt;/em&gt; different screens, effectively every variation possible
is on the market. This &lt;a href="https://opensignal.com/reports/2014/android-fragmentation/"&gt;analysis and visualization of Android device
fragmentation&lt;/a&gt;
is good, though a bit old. Here are &lt;a href="https://developer.android.com/about/dashboards/index.html"&gt;up to date stats for Android from the
Google Play Store&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There was a joke that Samsung didn't know what the right screen aspect
ratio was, so they made every variation and counted what people actually
bought.&lt;/p&gt;
&lt;p&gt;Because device capabilities vary widely, developers have to work
in "logical" device-independent pixels and translate into device pixels.
Text may have different thickness at different screen resolutions, causing
differences in appearance, margins and line breaks.&lt;/p&gt;
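&lt;p&gt;As a rough sketch of that translation (simplified; a real Android app would
read the density from the DisplayMetrics API rather than hard-coding it): one
density-independent pixel (dp) equals one physical pixel at the 160 dpi
baseline, and scales proportionally on denser screens.&lt;/p&gt;

```java
// Simplified sketch of Android's dp-to-pixel conversion.
// One dp equals one physical pixel at the 160 dpi baseline density;
// denser screens scale the value up proportionally.
public class DpToPx {
    static final float BASELINE_DPI = 160f;

    static int dpToPx(float dp, float deviceDpi) {
        return Math.round(dp * (deviceDpi / BASELINE_DPI));
    }

    public static void main(String[] args) {
        // A 48dp touch target is 48px on an mdpi (160 dpi) screen
        // but 96px on an xhdpi (320 dpi) screen:
        System.out.println(dpToPx(48f, 160f)); // prints 48
        System.out.println(dpToPx(48f, 320f)); // prints 96
    }
}
```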
&lt;p&gt;Developers need to use dynamic layouts which change the relationship between
elements to accommodate different screen aspect ratios. Spacing between elements
is fundamentally variable. Instead of using the drag-and-drop Interface
Builder like in iOS, developers generally need to code user interfaces in XML
text files.&lt;/p&gt;
&lt;p&gt;Sometimes these dynamic layouts result in poor results. A tall and narrow
display may have too much space between vertical elements and unsightly
line breaks, and an especially wide display may have blank spaces on the
sides of the screen. It is possible to create special optimized layouts for
different screens, but that requires extra work and can only cover a subset
of the market.&lt;/p&gt;
&lt;h2&gt;5. Android hardware is highly variable and software support is buggy&lt;/h2&gt;
&lt;p&gt;iOS hardware is powerful, at the top end of the market. It is reasonable for
iOS developers to only target newer devices, as users upgrade regularly.
Apple aggressively stops supporting older hardware with newer OS releases,
pushing people to upgrade.&lt;/p&gt;
&lt;p&gt;Android has some extremely low end devices, often of poor quality and
unbalanced resources, e.g. low end tablets run phone chips combined with big
displays and may not have enough RAM to handle the screen.&lt;/p&gt;
&lt;p&gt;On iOS, each app gets the hardware to itself when it's running. On Android,
apps may be sharing resources with other apps running in the background.
The app is also limited by the OS from using all the resources that are available.&lt;/p&gt;
&lt;p&gt;The iPhone uses the GPU to accelerate the user interface, and has APIs
to support animations. This gives better performance for scrolling or fancy
UI effects with no additional work for the developer. The iPhone has
advanced features like fingerprint recognition or advanced Bluetooth, and
they generally work without issues.&lt;/p&gt;
&lt;p&gt;The normal process for Android hardware support is as follows: Google
releases a new Android version. Chip manufacturers like Qualcomm or
Allwinner add drivers to support the special features of their chips, e.g.
audio / video codecs. This takes a few weeks or months, depending on
their relationship with Google and their access to pre-release versions of
Android. Mobile device manufacturers choose chips from the manufacturer
and get a reference kernel and hardware design. They may use it as is,
or may customize it to differentiate themselves. They may have
more or less software engineering ability in house, and may not even have complete
datasheets for all the features of the chips they use.&lt;/p&gt;
&lt;p&gt;The result is that there may be bugs in hardware support which cause
strange results and crashes when using features like video or Bluetooth.
Mass market apps need to work with all of the devices in the market, so they
can't assume advanced hardware features. Developers need to design for the lowest common
denominator, provide fallbacks and do more performance optimization, which takes
more development time. Carrier and OEM preload agreements can force
&lt;a href="/blog/abc-device-support-and-explicit-release-criteria/"&gt;Class A&lt;/a&gt;
support for marginal hardware.&lt;/p&gt;
&lt;p&gt;In applications which have millions of users, we can expect to see crashes due
to broken hardware, e.g. bad RAM. Debugging these problems can be extremely
difficult, because there is not fundamentally anything wrong with the software.&lt;/p&gt;
&lt;h2&gt;6. Android has many OS variations&lt;/h2&gt;
&lt;p&gt;Most iOS users are running the latest operating system release within a week of
its release. More than 95% of users with up-to-date hardware upgrade immediately.
Overall more than 80% of users run the latest release.&lt;/p&gt;
&lt;p&gt;Android users rely on their device manufacturer and carrier to provide OS updates,
and most do not. Manufacturers expect users to buy new devices and treat the new
OS as an incentive to upgrade.&lt;/p&gt;
&lt;p&gt;The result is that less than five percent of Android users are
running the latest Android OS release months later. About 30% run the previous
release family, 30% are two releases back, 30% are three back, and less than 5%
run very old releases. New OS penetration comes as people get new phones every
two years. In developing markets, significant percentages of customers run used
phones until they die or run low end hardware that can't handle the latest OS.&lt;/p&gt;
&lt;p&gt;Manufacturers such as Samsung differentiate themselves by customizing
the user interface. This takes time and may result in unique platform
bugs or incompatibilities. Some manufacturers modify how the OS works in
fundamental ways, e.g. automatically saving a copy of every photo the user
takes in a gallery. This causes problems for health-related apps which need
to maintain privacy of patient data.&lt;/p&gt;
&lt;h2&gt;7. Apple is more secure than Android&lt;/h2&gt;
&lt;p&gt;Apple differentiates itself with security and privacy compared to Google.
Apple sells phones and Google sells advertising. Outside of niche devices like
the Pixel, Google does not control the hardware. It makes money from
services, not hardware, so it wants the most users possible no matter what
they are. Apple supports features like fingerprint recognition and
hardware security modules.&lt;/p&gt;
&lt;p&gt;Google does not police the app store for quality the way that Apple does,
and carrier- or 3rd-party app stores are common. Users may install pirated
apps or side-load unsigned apps. The OS is more likely to be "rooted".
Users may be less sophisticated and install malware, or they may customize
everything up to the point of installing custom firmware.&lt;/p&gt;
&lt;p&gt;iOS apps are restricted in what they can do. Android apps can do almost
anything, including running in the background and capturing keystrokes,
listening to the microphone, reading the user's location and sending everything
to the net.&lt;/p&gt;
&lt;p&gt;Android devices often have external storage and support plugging in USB
devices. iOS devices require permission from Apple to connect hardware.&lt;/p&gt;
&lt;p&gt;The result is that creating high security apps on Android is much harder,
as they need to operate in a hostile environment.&lt;/p&gt;
&lt;h2&gt;8. Testing effort&lt;/h2&gt;
&lt;p&gt;Creating a high quality app is already more difficult, &lt;em&gt;then&lt;/em&gt; we need to test
on a wide range of devices.&lt;/p&gt;
&lt;p&gt;Applications with many users must test on tens or hundreds of devices.
Developers must debug problems on devices that they don't have access to, so
they rely on issue reports from users or software which records crash dumps and
sends cryptic reports.&lt;/p&gt;
&lt;p&gt;This effort can significantly increase the cost of developing an application as
the user base increases. It's necessary to &lt;a href="/blog/abc-device-support-and-explicit-release-criteria/"&gt;make explicit decisions about which
devices will be tested and define release
criteria&lt;/a&gt; and triage
issues based on their impact on the user base.&lt;/p&gt;
&lt;h2&gt;Links&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://alty.co/blog/how-to-port-an-ios-app-to-android/"&gt;How to port an iOS app to Android&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This consulting company found an &lt;a href="https://infinum.co/the-capsized-eight/android-development-is-30-percent-more-expensive-than-ios"&gt;average of 30% more time for
Android&lt;/a&gt;
on their projects when making the same apps on both platforms, with a lot of
variation.&lt;/p&gt;
&lt;p&gt;Discussion of development issues:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.quora.com/How-does-Android-development-compare-to-iOS-development"&gt;https://www.quora.com/How-does-Android-development-compare-to-iOS-development&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thenewstack.io/scoring-comparison-android-ios-development/"&gt;https://thenewstack.io/scoring-comparison-android-ios-development/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://whallalabs.com/mobile-application-development-ios-vs-android-vs-windows-phone/"&gt;http://whallalabs.com/mobile-application-development-ios-vs-android-vs-windows-phone/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://savvyapps.com/blog/how-much-does-app-cost-massive-review-pricing-budget-considerations/"&gt;https://savvyapps.com/blog/how-much-does-app-cost-massive-review-pricing-budget-considerations/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.infoworld.com/article/2920333/mobile-development/swift-vs-objective-c-10-reasons-the-future-favors-swift.html"&gt;http://www.infoworld.com/article/2920333/mobile-development/swift-vs-objective-c-10-reasons-the-future-favors-swift.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://material.io/guidelines/"&gt;https://material.io/guidelines/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><category term="Products"/><category term="mobile"/><category term="android"/><category term="ios"/><category term="process"/><category term="development"/></entry><entry><title>An example of user personas</title><link href="https://www.cogini.com/blog/an-example-of-user-personas/" rel="alternate"/><published>2017-07-12T00:00:00+08:00</published><updated>2017-07-12T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-07-12:/blog/an-example-of-user-personas/</id><summary type="html">&lt;p&gt;When we create products, it's important that they help &lt;em&gt;specific&lt;/em&gt; users with
their issues, not &lt;em&gt;generic&lt;/em&gt; users. It's easy to create a list of features, all
of which sound good, but don't provide a compelling solution to a specific problem
for a specific user. Without that, people won't buy your …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When we create products, it's important that they help &lt;em&gt;specific&lt;/em&gt; users with
their issues, not &lt;em&gt;generic&lt;/em&gt; users. It's easy to create a list of features, all
of which sound good, but don't provide a compelling solution to a specific problem
for a specific user. Without that, people won't buy your product. If you make one
user happy, then you can make a sale, and then expand out from there.&lt;/p&gt;
&lt;p&gt;User personas (or personae) are a way of getting into the mind of your users
and figuring out what they need. We define the users and then describe their
background in what may initially seem like excessive detail. But by defining
this background, we can get at their motivations and the emotions associated
with their decisions.&lt;/p&gt;
&lt;p&gt;There is a lot of overlap between user personas for product definition and
user personas for marketing. Once you have defined your users, you know
how to talk to them, where they hang out, and what messages will attract them
to your product.&lt;/p&gt;
&lt;h1&gt;An example&lt;/h1&gt;
&lt;p&gt;Say we are working to define a product for the accounting needs of small
businesses, similar to QuickBooks or Xero. We have identified a few
personas to help us understand people's needs, skills and motivations.&lt;/p&gt;
&lt;h2&gt;Bob&lt;/h2&gt;
&lt;p&gt;Bob is the manager of a small contracting company with four employees that
mainly does kitchen renovations. He is 45 years old, and has been running his
own business for about 15 years.&lt;/p&gt;
&lt;p&gt;He is comfortable with a PC. He mainly uses Excel to keep track of expenses and manage
the books for his company. He doesn't spend much time in the office, though, as
he makes his money working at customer sites.&lt;/p&gt;
&lt;p&gt;He understands the business side of invoices, and has the scars to prove it.
He has a big stack of paper on his desk.  He has unnecessary cash flow problems
because he pays for materials before getting reimbursed by his customers, but
his records are disorganized and he doesn't know what people owe him or what
they have paid. He doesn't have any formal training in accounting.&lt;/p&gt;
&lt;p&gt;He hates to do admin stuff, as it takes time away from being with his family.&lt;/p&gt;
&lt;p&gt;He was recently fined by the IRS for failing to get his taxes done on time.
This is motivating him to "get organized" and get a "real accounting system" in
place.&lt;/p&gt;
&lt;p&gt;He decided to hire his niece, Alice, part time to help organize things. He is a
bit embarrassed to let her see how bad things are.&lt;/p&gt;
&lt;h2&gt;Alice&lt;/h2&gt;
&lt;p&gt;Alice is a 17 year old senior in high school. She is a "digital native", who has
been using computers all her life, and uses her phone for hours a day. She
doesn't know much about business, and certainly not accounting. She took a
Microsoft Office class in high school.&lt;/p&gt;
&lt;p&gt;Her mother told her she needs to help Bob. She doesn't really know what she is
supposed to do, but having a bit of money is nice. She wants to show that
she is independent and can get the job done so her mom will stop treating
her like a child.&lt;/p&gt;
&lt;h2&gt;Nancy&lt;/h2&gt;
&lt;p&gt;Nancy is Bob's bookkeeper. She is 40 years old. She used to work as an
accountant at an auto parts wholesaler, then quit to have kids. She started
doing the books for small businesses on the side about five years ago. She
charges $100/month, fixed price. She has 20 customers now. The money is OK, but
she wishes she didn't have to deal with so much chaos. She doesn't like having to chase
people for what she needs to do her job; it takes up as much of her
time as the actual accounting.&lt;/p&gt;
&lt;p&gt;Bob has the real pain points: he needs something to help him get organized and
improve the way his business works without spending too much time or money on it.
Ideally he would be able to see reports about who owes him money, easily create
invoices and figure out if he is making money on different projects. He needs to be
able to keep track of expenses paid for by him and by his team. He recently
got a store credit card at Home Depot so he can track expenses, but he still
needs to match them to the job. Being able to take pictures of receipts with
his phone to get them into the system would be nice; he sometimes loses
receipts, and it causes him a lot of trouble.&lt;/p&gt;
&lt;p&gt;Alice is responsible for tracking expenses and payments. She gets the bank
statements and matches money received to outstanding invoices and money paid to
other expenses like the truck, the rent and electricity. Some bills are paid
from the bank, others via check, others via credit card. She needs to enter
paper receipts and checks received.&lt;/p&gt;
&lt;p&gt;She doesn't really know what these words mean, though. She needs the software
to walk her through the process. Bob will be there sometimes, but she needs to
be able to do things by herself if possible. She will be working after school
and on weekends when Bob may not be around. If she could do it from home
instead of his office, that would be even better.&lt;/p&gt;
&lt;p&gt;Nancy mainly wants to get her job done as quickly as possible, since she
doesn't make any more money by having it take longer. If she didn't have to
visit Bob or have to call him on the phone, so much the better. She needs to
have all the data in the system, then generate the reports for the IRS.
She doesn't need hand holding from an accounting perspective. She doesn't
want to know who Bob's customers are, they are just names on accounts to her.&lt;/p&gt;
&lt;p&gt;We could define some other people in this scenario, e.g. members of Bob's team,
his customers, his suppliers. We could also think about how the software could
help Nancy's business run better and get more business herself. For example, she
would like to have something to keep track of her customers and help her follow
up on what they should give her and other important deadlines.&lt;/p&gt;
&lt;p&gt;We can define users from some other small businesses, e.g. a restaurant with
part time staff, a retail operation that has lots of transactions, inventory
accounting, etc.&lt;/p&gt;
&lt;p&gt;Ultimately, the most interesting insight may be that Bob doesn't actually want
accounting software; he wants his accounting to be done. This is a trend that
I call "Software As A Service plus Service". We could provide a software platform
&lt;em&gt;plus&lt;/em&gt; the people to run it for our customers. This is particularly useful for
SMEs, where a key problem they have is that they don't need (and can't afford)
a full time person to handle a task like accounting, unlike a bigger company which
may have a whole accounting department.&lt;/p&gt;</content><category term="Products"/><category term="personas"/><category term="design"/><category term="process"/></entry><entry><title>An example of user stories</title><link href="https://www.cogini.com/blog/an-example-of-user-stories/" rel="alternate"/><published>2017-07-12T00:00:00+08:00</published><updated>2017-07-12T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-07-12:/blog/an-example-of-user-stories/</id><summary type="html">&lt;p&gt;User stories are the "Director of Operations" view of the world.
They describe step by step how the business works, and what the software needs
to do to support it. They should be done after we have defined the &lt;a href="/blog/an-example-of-user-personas/"&gt;user
personas&lt;/a&gt;, to make sure we are starting
from users and …&lt;/p&gt;</summary><content type="html">&lt;p&gt;User stories are the "Director of Operations" view of the world.
They describe step by step how the business works, and what the software needs
to do to support it. They should be done after we have defined the &lt;a href="/blog/an-example-of-user-personas/"&gt;user
personas&lt;/a&gt;, to make sure we are starting
from users and their goals.&lt;/p&gt;
&lt;h1&gt;An example&lt;/h1&gt;
&lt;p&gt;Let's say we are making a food delivery service.&lt;/p&gt;
&lt;p&gt;A customer hears about the delivery service by seeing a flier on the counter
when they are going to their favorite Mexican restaurant. They take the flier,
and the next weekend it's raining, so they decide to try the take-out service.&lt;/p&gt;
&lt;p&gt;They download the mobile app using the QR code on the flier and search for the
restaurant's profile.  They choose the food they want and make their order.  We
will assume cash on delivery at this point, though we could also use credit
cards or some other mobile payment method.&lt;/p&gt;
&lt;p&gt;The mobile app sends the order to the service website via the API, then the
server notifies the restaurant that there is an order.&lt;/p&gt;
&lt;p&gt;The cashier at the restaurant has the restaurant side of the mobile app in her
pocket. It buzzes and she takes her phone out and confirms the order.
Accepting the order schedules the driver to pick it up.&lt;/p&gt;
&lt;p&gt;The cashier writes the order down on their standard paper order sheet and
gives it to the kitchen. She adds the order number from the delivery system as
well, connecting their order and the delivery service order. We could also integrate
with their POS and kitchen management system, but that's out of scope right now.&lt;/p&gt;
&lt;p&gt;The driver comes in with a list of orders on his phone. He checks off each one
using the order number on the paper and leaves with the food in his messenger
bag.&lt;/p&gt;
&lt;p&gt;The driver's phone app gives him a suggested sequence of delivery locations. He
goes to them one by one, giving them the food and taking the money. He checks
off each delivery as he does it. The system knows where he is from the phone's
GPS and can tell the customer when he is getting close to their house. The app
can give directions to the driver using Google navigation.&lt;/p&gt;
&lt;h2&gt;What can go wrong?&lt;/h2&gt;
&lt;p&gt;There are various things that can go wrong at each step. These problems and how
we deal with them are the most important things to think about; they can make or
break the service.&lt;/p&gt;
&lt;p&gt;For example, the restaurant might be too busy to deliver when the customer
wants. Or they might not acknowledge the order at all. Or they might make a mistake with
the order. Or the driver might pick up the wrong bag.&lt;/p&gt;
&lt;p&gt;We have some standard expected time frames, e.g. delivery within 45 minutes. So
if the restaurant can meet that, then the cashier can simply
acknowledge the order. If they are busy and can't do it in the standard time,
then she might propose another time, e.g. one hour, or reject the order
entirely. Then we would need to notify the customer to see if the time is
acceptable or give them an opportunity to change to a different restaurant.&lt;/p&gt;
&lt;p&gt;If we get the order wrong, the customer might accept it if we give it to them
for free along with a coupon for another order.  But if the delivery guy trips
on the stairs and drops the food, then we have a bigger problem: a hungry
customer and no food.  We might need to expedite an order from the restaurant
to take care of them. We might need a customer service phone number that they
can call, as we can't expect delivery people to have good customer support
skills. We also have a branding problem: the problems are generally going to be
the restaurant's fault, but the service may get blamed for it.&lt;/p&gt;
&lt;p&gt;We are relying on mobile phones a lot, but we need to make sure that we can
survive if they fail. We would like to be able to start working with paper in
the first implementation stage, then implement the mobile apps. We have a
source of human error if we write on order forms, so a printer in the
restaurant would be nice, with handwriting still as the backup. If the driver
drops his phone and breaks it, he could fall back to a printout of the order
stapled to the bag, which includes the customer info.&lt;/p&gt;
&lt;p&gt;We need to consider how having the cashier take a phone order will make
customers standing in line feel about having to wait. This emotional
aspect is an important part of user personas.&lt;/p&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;The next step is to figure out what we need to get started running this
business, and how software can help. If you can do it with Excel or paper
in the beginning, then you can start running a "prototype" of the business
and validating your assumptions. Other parts like a nice looking website
might be necessary from a marketing perspective.&lt;/p&gt;
&lt;p&gt;After user stories are done, we prioritize "paths" through the application,
and write detailed technical use cases, estimate them and implement them.&lt;/p&gt;</content><category term="Products"/><category term="user stories"/><category term="process"/><category term="design"/></entry><entry><title>Presentation on Elixir performance</title><link href="https://www.cogini.com/blog/presentation-on-elixir-performance/" rel="alternate"/><published>2017-06-25T00:00:00+08:00</published><updated>2017-06-25T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-06-25:/blog/presentation-on-elixir-performance/</id><content type="html">&lt;p&gt;Here are the slides for the &lt;a href="https://www.cogini.com/files/elixir-performance.pdf"&gt;presentation on performance tuning
Elixir&lt;/a&gt; I gave to the local Elixir
user's group.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="erlang"/><category term="presentations"/><category term="performance"/></entry><entry><title>It's only a minimum viable product if it hurts</title><link href="https://www.cogini.com/blog/its-only-a-minimum-viable-product-if-it-hurts/" rel="alternate"/><published>2017-04-08T00:00:00+08:00</published><updated>2017-04-08T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2017-04-08:/blog/its-only-a-minimum-viable-product-if-it-hurts/</id><summary type="html">&lt;p&gt;One definition of a startup is "a company in search of a repeatable business
model." If we have a clear idea of what the customer needs, what they are
willing to buy, and have built the product that satisfies them, then we are no
longer a startup, we are an …&lt;/p&gt;</summary><content type="html">&lt;p&gt;One definition of a startup is "a company in search of a repeatable business
model." If we have a clear idea of what the customer needs, what they are
willing to buy, and have built the product that satisfies them, then we are no
longer a startup, we are an established business.&lt;/p&gt;
&lt;p&gt;Maybe we understand customers and the market well enough that we can sit in our
cave, build the product, then release. But it's more likely that we will
get it somewhat wrong the first time and need to change direction. The process
of building a startup is iterating as quickly and efficiently as possible to
get to product-market fit before we run out of money.&lt;/p&gt;
&lt;p&gt;The term "Minimum Viable Product" means that we define the smallest "complete"
product which can be launched and satisfy a customer. A lot of entrepreneurs
get caught in the trap of thinking that they know what they are doing, or
they pay lip service to MVP and don't do it. Instead they need to &lt;em&gt;actually&lt;/em&gt;
build the product step by step based on feedback from customers.&lt;/p&gt;
&lt;p&gt;It can feel bad to make a MVP, because you show it to your friends or customers
or investors and they say "is that all there is?". Or you get worried that the
product will not be successful, and think "in order to be successful, a product
needs to have more features than the competition", so you add more and more
features to the initial release. But then you burn all your budget going in the
wrong direction and have nothing left to make necessary changes or to market
it.&lt;/p&gt;
&lt;p&gt;One of the key things that drives the real MVP is marketing. You only get one
chance to make a first impression, and it can cost a lot of money to do a
proper marketing campaign. On the other hand, if you do a "soft launch" with
your MVP, then you can get customers without having to be big and perfect.  You
just have to meet the needs of a set of specific customers, then you can build
on in the next release. So it's a matter of setting expectations properly, then
showing progress.&lt;/p&gt;
&lt;p&gt;You may be better off reserving, say, 1/3 of your budget for customer development
at the beginning and marketing and sales at the end, 1/3 for initial product
development, then 1/3 for a second phase of development. This ensures that you
actually get a product to market. It can also bring in revenue which funds the
business (bootstrapping), reducing reliance on later funding rounds.&lt;/p&gt;
&lt;p&gt;In the movie business, they have a concept of a "completion bond", a form
of insurance. It makes sure that a movie can be finished when it has run out of
money. If the film is not released, then it makes no money, and all the
investors get nothing. The insurance company comes in with a hard-core
penny-pinching producer who finishes the movie and gets it into
theaters. We don't have this in startups, unfortunately.&lt;/p&gt;
&lt;p&gt;If you really take this to heart, there is a lot that you can do at the
beginning by focusing on reducing risk. For example, you can visit potential
customers and interview them about their top ten problems. You may have a good
idea, but are building a solution for problem #7 instead of #1, and all the
budget is gone before the customer gets to #7. You can get them to sign a
"letter of intent" that says they will buy the product if you build it. It is
non-binding, but makes them take it seriously and talk with other people in
their organization to get approval to sign something. That helps avoid the trap
of customers being friendly and supportive to talk with you, but not ultimately
being really serious about buying your product.&lt;/p&gt;
&lt;p&gt;The key is to build relationships with target customers.  If we can get the
customers to feel ownership in the solution, then they can become powerful
advocates, e.g. providing initial references and introductions.&lt;/p&gt;
&lt;p&gt;There is a danger that if we pay too much attention to the needs of our
initial customers, we won't build a general product; we will become a
consulting company building custom solutions. But at least that makes money,
is a better problem to have, and gives us more insights into customers which
we can turn into products.&lt;/p&gt;</content><category term="Products"/><category term="entrepreneurs"/><category term="mvp"/><category term="startups"/><category term="process"/></entry><entry><title>The Four Stages of Santa Claus for software developers</title><link href="https://www.cogini.com/blog/the-four-stages-of-santa-claus-for-software-developers/" rel="alternate"/><published>2016-09-08T00:00:00+08:00</published><updated>2016-09-08T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2016-09-08:/blog/the-four-stages-of-santa-claus-for-software-developers/</id><summary type="html">&lt;p&gt;There is a joke, "The Four Stages of Santa Claus":&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You believe in Santa Claus&lt;/li&gt;
&lt;li&gt;You do not believe in Santa Claus&lt;/li&gt;
&lt;li&gt;You are Santa Claus&lt;/li&gt;
&lt;li&gt;You look like Santa Claus&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;There is an equivalent for software developers when it comes to requirements
documents and specs.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;In the beginning of …&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;</summary><content type="html">&lt;p&gt;There is a joke, "The Four Stages of Santa Claus":&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You believe in Santa Claus&lt;/li&gt;
&lt;li&gt;You do not believe in Santa Claus&lt;/li&gt;
&lt;li&gt;You are Santa Claus&lt;/li&gt;
&lt;li&gt;You look like Santa Claus&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;There is an equivalent for software developers when it comes to requirements
documents and specs.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;In the beginning of your career, you only see complete specs, and you act
   like Santa Claus brings the requirements in the middle of the night and puts
   them under the tree.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You realize that someone had to write the requirements, but it's not you.
   You complain that the specs are not clear or "the customer doesn't know what
   they are doing."&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You realize that &lt;em&gt;you&lt;/em&gt; need to write the specs and clarify the requirements,
   or the project is going to have problems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You run a software consulting company and it gives you gray hairs :-)&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In software development, we have to bridge the gap between the "requirements"
side and the "solution" side. That means we have to figure out exactly what the
goals of the system are and how we are going to satisfy them, in detail.
It's everyone's responsibility to do this, from the product manager
and project manager on the requirements side to the tech lead and dev team
on the solution side. There is no Santa Claus, unfortunately.&lt;/p&gt;
&lt;p&gt;If you are not doing everything you can, then you are making extra work for
other people. Your tech lead or project manager can do it, but they have
plenty of other work to do. Sometimes it falls to a business person, and that causes
real problems, because you are expecting them to do things that they can't do.
Programmers are good at logic, and sales people are good with people.
It works a lot better when you understand the business requirements and write
up a proposed solution for the business owner to approve.&lt;/p&gt;
&lt;p&gt;This is why outsourcing to cheap programmers in the developing world often has
problems. The client needs to be able to write the spec in incredible detail.
The developers can't bridge the gap because they are too junior or don't have
enough understanding of the business context.&lt;/p&gt;
&lt;p&gt;Sometimes in projects there is a situation I call "someone needs to think hard
and make some decisions." This can happen if we don't get complete requirements
up front, or we are iterating on product / market fit. It's also something to
watch out for in an agile process where we are implementing cycle by cycle:
everyone is doing something that works for this iteration but we still need to
pay attention to the big picture. It's easy to coast along without decisions
being made, wasting time and causing bad user experiences which need to be
reworked.&lt;/p&gt;
&lt;p&gt;For example, say we have a SaaS product, so we need a registration and purchase
process. The developer says, "tell me what the registration process is." The
designer says, "give me the screens and I will make it look awesome." The
entrepreneur says "I don't know, I am a business guy".&lt;/p&gt;
&lt;p&gt;The best way I have found is to start by understanding / defining your target
users with &lt;a href="/blog/an-example-of-user-personas/"&gt;user personas&lt;/a&gt;, then
determining high level user goals / pain points and their ideal user
experience. Then we define &lt;a href="/blog/an-example-of-user-stories/"&gt;user stories&lt;/a&gt;
which show how the product will help people achieve their goals. Next we go
through how the service will work step by step, identifying any problems that
may occur and how we will deal with them. I call this the "Director of
Operations" view. It's not technical, it's still on the "requirements side",
but it's sweating all the details.  With this preparation, we can
architect a technical solution which meets these requirements. If we skip
directly to the technical side, then there is a danger that we will build the
wrong thing.&lt;/p&gt;</content><category term="Development"/><category term="requirements"/><category term="process"/></entry><entry><title>Presentation on Elixir and embedded programming</title><link href="https://www.cogini.com/blog/presentation-on-elixir-and-embedded-programming/" rel="alternate"/><published>2016-07-25T00:00:00+08:00</published><updated>2016-07-25T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2016-07-25:/blog/presentation-on-elixir-and-embedded-programming/</id><content type="html">&lt;p&gt;Here are the slides for the &lt;a href="https://www.cogini.com/files/embedded-elixir.pdf"&gt;presentation on Elixir and embedded
programming&lt;/a&gt; I gave to the Elixir
user's group.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="erlang"/><category term="embedded"/><category term="nerves"/><category term="presentations"/></entry><entry><title>90 percent immutable</title><link href="https://www.cogini.com/blog/90-percent-immutable/" rel="alternate"/><published>2016-06-10T00:00:00+08:00</published><updated>2016-06-10T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2016-06-10:/blog/90-percent-immutable/</id><summary type="html">&lt;p&gt;After a fair amount of debugging, I got an app running in an AWS Auto Scaling
Group (ASG), pulling its config on startup from S3 and code from Amazon CodeDeploy.
On the way I found out some annoying parts of the cloud initialization process
in AWS.&lt;/p&gt;
&lt;p&gt;The idea is that …&lt;/p&gt;</summary><content type="html">&lt;p&gt;After a fair amount of debugging, I got an app running in an AWS Auto Scaling
Group (ASG), pulling its config on startup from S3 and code from Amazon CodeDeploy.
On the way I found out some annoying parts of the cloud initialization process
in AWS.&lt;/p&gt;
&lt;p&gt;The idea is that we can build a "generic" AMI which has the application
dependencies, then when the ASG starts up an instance, it will get the latest
code and configuration.&lt;/p&gt;
&lt;p&gt;Initially, I was planning to use the instance tags to keep bootstrapping
configuration information, e.g. whether it's running in staging or production
environment. This would let the instance get the config from the right S3 bucket,
contact the right RDS instance, etc.&lt;/p&gt;
&lt;p&gt;I wrote some systemd init scripts, one of which reads the EC2 instance metadata
and tags and writes the data to files on the disk in JSON format and as a shell
include file. Another script syncs the data from S3 to the local disk, and the
third starts up the application. Getting the dependencies set up properly on
these scripts ended up being tedious and difficult to get running reliably.&lt;/p&gt;
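The metadata script's output formats can be sketched roughly like this. The function name and file paths are hypothetical, chosen just to illustrate writing the same data both as JSON and as a shell include that later init scripts can source:

```python
import json

def to_shell_include(metadata):
    """Render a metadata dict as shell variable assignments,
    suitable for sourcing from other init scripts."""
    lines = []
    for key, value in sorted(metadata.items()):
        # e.g. "instance-id" becomes INSTANCE_ID
        name = key.upper().replace("-", "_")
        lines.append('%s="%s"' % (name, value))
    return "\n".join(lines) + "\n"

# Hypothetical values; in the real script these come from the
# EC2 metadata service and instance tags
metadata = {"instance-id": "i-0abc123", "placement-az": "us-east-1a"}

with open("/tmp/metadata.json", "w") as f:
    json.dump(metadata, f)
with open("/tmp/metadata.sh", "w") as f:
    f.write(to_shell_include(metadata))
```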
&lt;p&gt;The fundamental problem is that it takes some time for the metadata to be
available after the machine starts. And, critically, instance tags are not
available until &lt;em&gt;after&lt;/em&gt; CodeDeploy runs.&lt;/p&gt;
&lt;p&gt;As part of the deployment process, CodeDeploy takes an instance out of the
auto-scaling group, upgrades it, tests if it's healthy, then puts it back.
When CodeDeploy launches a fresh instance, it puts it in Waiting state, then
loads the code, then enables it. But instance tags are not available in Waiting
state; they are only available when the instance is ready. So you can call
boto and read the tags, but they won’t be there: the list will be empty. And
you can’t wait for them, because they won’t appear until the instance starts successfully.&lt;/p&gt;
&lt;p&gt;The next problem is that it takes some time for the basic metadata to be
available. So the startup script may run, make an HTTP request to the metadata
service, and not get the data, so it needs to sleep and retry.&lt;/p&gt;
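The sleep-and-retry logic is simple but easy to get subtly wrong; here is a minimal sketch. The `fetch` callable stands in for the HTTP request to the metadata service, so the loop itself can be exercised without a network:

```python
import time

def fetch_with_retry(fetch, attempts=10, delay=2.0):
    """Call fetch() until it returns data, sleeping between attempts.
    Returns None if the metadata never shows up."""
    for _ in range(attempts):
        data = fetch()
        if data:
            return data
        time.sleep(delay)
    return None

# Simulate a metadata service that returns nothing on the first
# two requests, then the real data
responses = iter([None, None, {"instance-id": "i-0abc123"}])
result = fetch_with_retry(lambda: next(responses), attempts=5, delay=0.0)
```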
&lt;p&gt;What I ended up doing is putting the parameters in base64 JSON in the user_data
field, which &lt;em&gt;is&lt;/em&gt; available, though you may have to wait a while to get it.
And I set up hard dependencies between the startup scripts. So the metadata
service runs first, then the S3 sync script (which needs the metadata to know
which bucket to read from), then the application startup. Yay systemd.&lt;/p&gt;
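The user_data round trip looks something like the following; the parameter names here are made up for illustration:

```python
import base64
import json

# Parameters the instance needs at boot, e.g. which environment it
# belongs to and which S3 bucket holds its config
params = {"env": "staging", "config_bucket": "foo-app-staging-config"}

# What goes into the user_data field when launching the instance
user_data = base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")

# What the startup script does after reading user_data back from
# the metadata service
decoded = json.loads(base64.b64decode(user_data))
```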
&lt;p&gt;And to make startup faster, I added manual calls in each script to
call the previous script to get its dependencies if they are not there.&lt;/p&gt;
&lt;p&gt;Making this even more fun to debug is that everything works fine in a standalone
instance, but has problems in the auto-scaling group. And when the instance
fails in the ASG, the app doesn’t start up so it fails the health check, so it
gets shut down and another instance runs, over and over again. So when you are
debugging, you ssh in and then the instance is abruptly terminated, and you
wait for the next instance to be started so you can try again. And each debug
cycle takes 20 minutes as you build an AMI and deploy it. Sigh.&lt;/p&gt;
&lt;p&gt;After all that, I am tempted to use the same lifecycle hook that CodeDeploy
uses to drive Ansible instead: an instance starts up, and it pushes a message to
SNS/SQS. A Python script sits on the ops server waiting for this to happen; when
it gets the message, it runs an Ansible playbook on the instance to configure
it and deploy the code.&lt;/p&gt;
&lt;p&gt;See the &lt;a href="http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/introducing-lifecycle-hooks.html"&gt;lifecycle hooks docs&lt;/a&gt; for details.&lt;/p&gt;
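A sketch of the message-handling side, assuming the SNS-wrapped notification format described in the lifecycle hooks docs (the real script would poll SQS with boto and then shell out to ansible-playbook; that part is elided here):

```python
import json

def parse_lifecycle_message(body):
    """Extract the fields needed to act on an auto-scaling lifecycle
    notification delivered via SNS/SQS."""
    # SNS wraps the notification, so the interesting JSON is nested
    # inside the "Message" field
    notification = json.loads(json.loads(body)["Message"])
    return {
        "instance_id": notification["EC2InstanceId"],
        "transition": notification["LifecycleTransition"],
    }

# A trimmed-down example of what arrives on the queue
body = json.dumps({
    "Message": json.dumps({
        "EC2InstanceId": "i-0abc123",
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    })
})
event = parse_lifecycle_message(body)
# On EC2_INSTANCE_LAUNCHING, the script would then run the Ansible
# playbook against event["instance_id"]
```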
&lt;h2&gt;ANSIBLE ALL THE THINGS!&lt;/h2&gt;
&lt;p&gt;This is a case where the Chef "pull" model might be more convenient, though in
general I like Ansible a lot. I find that Ansible is better at creating the
instances in the first place. I like the fact that Ansible doesn't need an
agent and is basically just a list of canned tasks. The list of tasks is
comprehensive, and it's easy enough to define your own. Ansible is also natural
for the systems admins on the team.&lt;/p&gt;
&lt;p&gt;So here we are at something like 90% "immutable infrastructure". I could go all
the way and burn the deps, env-specific config and code into an AMI and launch
it from the ASG.&lt;/p&gt;
&lt;p&gt;Packer makes it quite easy to build AMIs, and I am kinda tempted at this point,
but it's still slow enough to be annoying.&lt;/p&gt;
&lt;p&gt;Adding the runtime configuration would mean putting "secrets" into the AMI. I
didn’t find a really satisfactory way of passing the vault key into Ansible.
By not satisfactory, I mean it worked, but was a bit hackish: you somehow
get the password into an env var, which gets passed into the Ansible script,
which means it's visible in your terminal. Or if you are paranoid, you can use
temp files which you hopefully delete. So choose your poison.&lt;/p&gt;
&lt;p&gt;Here is an example Packer file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;variables&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;pass&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;{{env `ANSIBLE_VAULT_PASS`}}&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;builders&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;type&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;amazon-ebs&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;region&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;us-east-1&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;source_ami&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ami-123&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;instance_type&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;t2.micro&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ssh_username&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;centos&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ami_name&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;foo app {{timestamp}}&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;vpc_id&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;vpc-123&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;subnet_id&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;subnet-123&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;associate_public_ip_address&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ami_virtualization_type&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;hvm&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;communicator&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ssh&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ssh_pty&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;launch_block_device_mappings&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;device_name&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/dev/sda1&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;volume_size&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;volume_type&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;gp2&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;delete_on_termination&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;provisioners&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;type&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;shell&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;inline&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;sudo yum install -y epel-release&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;sudo yum install -y ansible&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;type&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;ansible-local&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;playbook_file&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;../ansible/foo-app.yml&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;playbook_dir&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;../ansible&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;inventory_groups&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;app,tag_env_staging&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;command&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;echo &amp;#39;{{user `pass`}}&amp;#39; | ansible-playbook&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;extra_arguments&amp;quot;&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;--vault-password-file=/bin/cat --tags setup&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;CodeDeploy&lt;/h2&gt;
&lt;p&gt;I am pretty happy with CodeDeploy so far. By standardizing the deployment
process across apps, our “follow the sun” sysadmin team in Europe, Asia and
Latin America can roll back to a previous successful release without needing to
know much about the app.  So if something goes bump in the night, someone will
be able to deal with it during their day without having to get the developers
out of bed.&lt;/p&gt;
&lt;h2&gt;UPDATE&lt;/h2&gt;
&lt;p&gt;We are now using Terraform to provision instances, with Ansible to set them up.
We build an AMI for each environment (staging, production) using Packer, with
the config it needs baked in, then deploy the app using CodeDeploy. This avoids
the problems discussed here.&lt;/p&gt;</content><category term="DevOps"/><category term="aws"/><category term="ansible"/><category term="packer"/><category term="codedeploy"/></entry><entry><title>Presentation on thinking functionally in Elixir</title><link href="https://www.cogini.com/blog/presentation-on-thinking-functionally-in-elixir/" rel="alternate"/><published>2016-05-25T00:00:00+08:00</published><updated>2016-05-25T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2016-05-25:/blog/presentation-on-thinking-functionally-in-elixir/</id><content type="html">&lt;p&gt;Here are the slides for the &lt;a href="https://www.cogini.com/files/elixir-thinking-functionally.pdf"&gt;presentation on thinking functionally in
Elixir&lt;/a&gt; I gave to the local Elixir
user's group.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="erlang"/><category term="presentations"/><category term="functional programming"/></entry><entry><title>Presentation on Erlang and functional programming</title><link href="https://www.cogini.com/blog/presentation-on-erlang-and-functional-programming/" rel="alternate"/><published>2016-05-16T00:00:00+08:00</published><updated>2016-05-16T00:00:00+08:00</updated><author><name>Jake Morrison</name></author><id>tag:www.cogini.com,2016-05-16:/blog/presentation-on-erlang-and-functional-programming/</id><content type="html">&lt;p&gt;Here are the slides for the &lt;a href="https://www.cogini.com/files/erlang-practical-functional.pdf"&gt;presentation on Erlang and functional
programming&lt;/a&gt; I gave to the local
Elixir user's group.&lt;/p&gt;</content><category term="Development"/><category term="elixir"/><category term="erlang"/><category term="presentations"/><category term="functional programming"/></entry></feed>