All posts by Juan Lebrijo

Mount an S3 bucket as local folder

This post is based on a previous idea from Anthony Heddings.

First of all, install the s3fs package on your Ubuntu system:

sudo apt install s3fs

After that, go to your AWS console and create a bucket and a user with read/write permissions on S3. For that purpose, this video from Tech Arkit is really useful.
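If you prefer to set the permissions by hand, a minimal IAM policy for that user could look like the following sketch (bucket-name is just a placeholder for your bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}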

This user comes with an access key and secret for your buckets, which you have to configure on your system:

touch /etc/passwd-s3fs
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

Then you can mount your bucket:

mkdir /mnt/bucket-name
s3fs bucket-name /mnt/bucket-name
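If the mount succeeded, the bucket contents should show up as regular files (the test file name here is just an example):

ls -la /mnt/bucket-name
touch /mnt/bucket-name/s3fs-test.txt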

It usually takes Amazon a few hours to propagate the changes, so you may run into problems at first. Meanwhile, you can use this debug command:

s3fs bucket-name /mnt/bucket-name -o dbglevel=info -f -o curldbg

If you want this to mount at boot, you’ll need to add the following to your /etc/fstab:

s3fs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,umask=227,uid=33,gid=33,use_cache=/root/cache 0 0
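To try the fstab entry without rebooting, unmount the manual mount (if it is still mounted) and then mount everything declared in fstab:

umount /mnt/bucket-name
mount -a
df -h /mnt/bucket-name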

Configuration Management

We create Ruby on Rails web applications, usually deployed to several environments, so automated deployment is really important for us, but so is automated installation (configuration). On the server side we use dedicated Ubuntu LTS systems on the usual providers: AWS, DO, Linode …

We use Capistrano for automated deployments, and years ago we searched for a Configuration Management tool. We evaluated the main options on the market: Chef, Puppet, Salt, Ansible … and finally chose Chef for one reason: we know Ruby, so writing recipes in Ruby was a big advantage.

But working with Chef Solo we found several limitations:

  • You need to install the Chef agent on every node, which makes the process heavy.
  • You need to run all tasks on every setup command, which slows down debugging.
  • Debugging errors is hell; there is no clear help for them, and the documentation is sometimes poor.

Given that situation, Ansible's agent-less approach would probably be the best option, but I found several inconveniences:

  • The language is based on YAML and is really verbose (try installing a big list of packages).
  • We would lose Ruby for writing recipes.
  • Working with environments/roles is not really standardized.

What do we really need?

  • Write recipes (scripts) in Ruby, our well-known and loved language.
  • Agent-less on the server side: solve everything over an SSH connection.
  • Work with variables.
  • Work with environment/roles variables and configurations.
  • Work with templates for configuration files.

Chef, Ansible, Salt … are really powerful, but:

  • Do we need OS compatibility? No, we use Ubuntu LTS versions.
  • Do we need complicated idioms? No, we have Ruby and shell scripts.
  • Do we need warehouses or galaxies of recipes? Well … yes, they are useful, but sometimes a shell script solves the same problem.

Sometimes the easiest way is the solution. Shell scripting? Close, but what about Ruby, variables, or templates? … Capistrano 3 (see the sketch after the list below):

  • Based on Ruby rake tasks.
  • Only SSH connections, based on SSHKit.
  • Working with roles, environments and variables out of the box.
  • We already know the tool because our deployments are based on it.
  • We will integrate the standard recipes into our prun-ops gem.
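As a taste of what this looks like, here is a minimal sketch of a configuration task written as a Capistrano rake task. The :system_packages variable, the default package list and the file location are hypothetical examples, not part of our prun-ops gem, and the task assumes passwordless sudo for the deploy user:

# lib/capistrano/tasks/packages.rake
namespace :setup do
  desc 'Install the system packages needed by the application'
  task :packages do
    on roles(:app) do
      # fetch() reads a Capistrano variable; the list can be overridden
      # per stage in config/deploy/production.rb or config/deploy/staging.rb.
      packages = fetch(:system_packages, %w[git nginx redis-server])
      execute :sudo, 'apt-get', 'install', '-y', *packages
    end
  end
end

With `set :system_packages, %w[...]` in a stage file, this runs over a plain SSH connection with `cap production setup:packages`.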

Yes, we need templates, but Ruby has ERB, and after a bit of googling we can find an easy solution for templating.
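For example, one easy solution is to render the ERB template locally and upload the result over SSH. This is only a sketch; the template path, the :app_server_port variable and the remote file names are hypothetical:

# lib/capistrano/tasks/templates.rake
require 'erb'
require 'stringio'

namespace :setup do
  desc 'Render an ERB template and upload it to the server'
  task :nginx_conf do
    port = fetch(:app_server_port, 3000)
    # The template can reference local variables, e.g. <%= port %>.
    rendered = ERB.new(File.read('config/templates/nginx.conf.erb')).result(binding)

    on roles(:web) do
      upload! StringIO.new(rendered), '/tmp/app_nginx.conf'
      execute :sudo, 'mv', '/tmp/app_nginx.conf', '/etc/nginx/sites-available/app.conf'
    end
  end
end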

Today our analysis points to Capistrano 3 as the best way to go; tomorrow … who knows?

DoS attacks Against WordPress XMLRPC

WordPress is the most popular blog system, but it has a weakness in its design: the XML-RPC protocol.

Brute Force Amplification Attacks Against WordPress XMLRPC

This protocol was designed to transmit pings and references between blogs, sending and accepting automatic messages between them.

I tried several solutions like the Manage XML-RPC plugin, but obviously, when you are being attacked, you cannot access the Dashboard to configure that plugin properly. Here are some other suggestions.

I will show you how I proceeded to reject the attack.

First, log the problem: `tail -f /var/log/nginx/access.log`. There you can see the offending IP making continuous /xmlrpc.php calls:

163.172.141.185 - - [28/Nov/2016:09:19:11 +0000] "POST /xmlrpc.php HTTP/1.0" 403 177 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; http://www.google.com/bot.html)"
163.172.141.185 - - [28/Nov/2016:09:19:11 +0000] "POST /xmlrpc.php HTTP/1.0" 403 177 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; http://www.google.com/bot.html)"
163.172.141.185 - - [28/Nov/2016:09:19:11 +0000] "POST /xmlrpc.php HTTP/1.0" 403 177 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; http://www.google.com/bot.html)"
163.172.141.185 - - [28/Nov/2016:09:19:11 +0000] "POST /xmlrpc.php HTTP/1.0" 403 177 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; http://www.google.com/bot.html)"

Second, deny the IP directly in your NGINX config (e.g. /etc/nginx/conf.d/base.conf): `deny 163.172.141.185;`.
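For reference, on a default Ubuntu install any *.conf file in /etc/nginx/conf.d/ is included inside the http block, so the file can be as small as this; then test the configuration and reload NGINX:

# /etc/nginx/conf.d/base.conf
deny 163.172.141.185;

nginx -t && service nginx reload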

NGINX then responds with `HTTP 403 Forbidden` to any request from this IP.