WordPress: 301 Moved Permanently via IP but loads with localhost

If you are getting this error when accessing the site by IP, but the site loads fine when you curl it via localhost, you can put nginx in front of Apache and proxy_pass requests to it.

Install nginx:
yum install epel-release

yum install nginx

Edit /etc/nginx/nginx.conf and change the default server's listen port from 80 to 8080, so nginx does not conflict with Apache, which is already listening on port 80.
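The relevant directive in the stock nginx.conf looks roughly like this (exact contents vary between nginx packages, so treat this as a sketch):

```nginx
# Default server block in /etc/nginx/nginx.conf:
server {
    listen 8080 default_server;    # was: listen 80 default_server;
    ...
}
```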

Create a file /etc/nginx/conf.d/wp.conf with the following contents:

server {
    listen       8081;
    server_name  _;

    location @wp {
        proxy_pass http://localhost:80;
    }

    location / {
        try_files $uri @wp;
    }
}
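Depending on your Apache/WordPress setup, you may also want nginx to forward the original Host header and client IP so WordPress generates correct URLs and logs real addresses. A sketch (not part of the original recipe):

```nginx
location @wp {
    proxy_pass         http://localhost:80;
    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
}
```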

Restart nginx: systemctl restart nginx

Your WordPress site will now be available at <IP>:8081

WordPress: configure a dynamic URL

Edit wp-config.php and add the following lines:
define('WP_HOME', '/');
define('WP_SITEURL', '/');
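If the relative values above cause issues with your WordPress version, a common alternative (a sketch, not from the original recipe) is to build the URLs from the incoming request's Host header:

```php
// Sketch: derive the site URL from the request.
// $_SERVER['HTTP_HOST'] is client-controlled, so only do this
// behind a proxy you trust.
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST']);
define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST']);
```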

Change the values in the database:

update wp_options set option_value='/' where option_name='siteurl';

update wp_options set option_value='/' where option_name='home';
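You can verify the change afterwards (assuming the default wp_ table prefix):

```sql
select option_name, option_value
from wp_options
where option_name in ('siteurl', 'home');
```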

Now your site URL will no longer have the IP/DNS hardcoded.

MySQL: add a user with all privileges

Execute the following statements:

CREATE USER 'wordpressuser'@'%' IDENTIFIED BY 'WordPress123!';
GRANT ALL PRIVILEGES ON *.* TO 'wordpressuser'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
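Note that *.* with GRANT OPTION gives this user full control over every database on the server. If the user only needs the WordPress database, a tighter grant (assuming the database is named wordpress) would be:

```sql
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'%';
FLUSH PRIVILEGES;
```

You can inspect the result with SHOW GRANTS FOR 'wordpressuser'@'%';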

Is it "okay" to host a small WordPress blog on one AWS EC2 instance without load balancers/Beanstalk?

This is a very simple question for those with the knowledge, but I’m a newbie.

In essence, I just need to know whether it would be considered okay to run a small (approx. 700 visitors/day) Bitnami WordPress blog on a single t2.medium EC2 instance, without any auto-scaling or Elastic Beanstalk.

Am I at risk of it crashing? What stats should I monitor to spot potential problems? Sorry for the basic nature of these questions, but this is all new to me.

Solution:

tl;dr: It might be “okay”, but it’s not ideal.

If your question is because of:

  • Initial setup time – load balancing and auto-scaling cost more time up front but save time over the long run.
  • Cost – auto-scaling spins down instances that aren't being used, which reduces cost.
  • Minimal setup for a great user experience – the goal of a good AWS setup is to ensure that capacity matches demand.

Am I at risk of it crashing?

Possibly, yes. If you average 700 visitors/day, the main risk is a traffic spike where many visitors hit the site at the same time. It also depends on what your peak visitor count is, which could vary widely from the average (or not).

What stats should I monitor to spot potential problems?

  • Monitor usage on high-traffic days (e.g. public-holiday sales)
  • Set up billing alerts
  • Set up the right metrics:

See John Rotenstein’s SO answer:

CPU Utilization is not always the right measure to use — your
application might only be able to handle a limited number of
connections, it might be squeezed on RAM and the types of requests
might vary too.

You can use normal monitoring tools, or you can write something that
pushes metrics to Amazon CloudWatch, so that you go beyond the basic
CPU and Network metrics that CloudWatch normally provides. You could
even use the Load Balancer’s Latency metric to trigger scaling when
the application slows down (custom code required).

I’d start with:

  • Two or more instances – for redundancy, so one instance going down doesn't take the site offline
  • Several t2.small instances rather than one t2.medium – can work out more cost-efficient in some use cases
  • Auto-scaling – automatically spin instances up or down between minimum and maximum counts
  • Load balancing – to re-route users from unhealthy to healthy instances, and to spread the load evenly across the running instances (rather than one instance handling 80% of the workload while the others sit idle)

You can always reduce your instance count later, once monitoring shows what you actually need.