John Pol

3 Steps to Set Up Auto Renewal for Let’s Encrypt SSL Certificates (Nginx)

In this tutorial, I will show you how to renew your Let’s Encrypt certificates automatically on an Nginx server. After this, you won’t need to renew your Let’s Encrypt SSL certificates manually. If you’re using an Apache server, follow this tutorial here.

How to Auto Renew Let’s Encrypt Certificates?

Setting up Let’s Encrypt auto renewal is simple: you just need to set up a cron job that renews your certificates automatically. But if I only showed you the cron job part, you might run into problems later. That is why this tutorial has 3 steps that let you set up a reliable auto renewal for your Let’s Encrypt SSL certificate. So, let’s get started.

Prerequisites

  • Before getting started with this tutorial, you should have installed Let’s Encrypt SSL certificates on your Nginx server. If you haven’t done that yet, follow this tutorial here (I am working on it).
  • A running Linux system with shell access and sudo or root privileges.

There are a lot of ACME clients available for Let’s Encrypt certificate installation, but for simplicity we use Certbot.

Let’s get started,

Step 1: Determining Certbot type

It is important to find out which type of Certbot installation you used when you installed your Let’s Encrypt SSL certificate, because the command you put in the cron script depends on it. There are two ways to install Certbot on a Linux server.

a) Using wget https://dl.eff.org/certbot-auto: According to Certbot’s official website, this method is used for older distributions such as Debian 8, CentOS 6, RHEL 6, and Ubuntu versions older than 16.04. If you used the wget method, then you already have a ‘certbot-auto’ script on your server; later we’ll need its location. People generally download certbot-auto to /usr/local/bin/certbot-auto, but you may have put it in /etc/letsencrypt or somewhere else.

If you forgot where it is or deleted it, execute the following lines from your SSH terminal.

wget https://dl.eff.org/certbot-auto
sudo mv certbot-auto /etc/letsencrypt/certbot-auto
sudo chown root /etc/letsencrypt/certbot-auto
sudo chmod 0755 /etc/letsencrypt/certbot-auto

b) Installed from a Linux repository: If you installed Certbot with a command like this

#For CentOS 7 or up
sudo yum install certbot python2-certbot-nginx
#For Debian 9 or up and Ubuntu 16.04 up
sudo apt-get install certbot python-certbot-nginx

then you installed Certbot from a Linux repository.

Step 2: Renewing Let’s Encrypt Certificates Automatically

In this step we will set up Let’s Encrypt auto renewal using cron.

Cron is a utility offered by Linux-like operating systems that automates scheduled tasks. It is a daemon that runs in the background and performs specified operations at predefined times without user intervention.

We are going to set up a scheduled task that executes the certbot renew command every weekend.

The certbot renew command attempts to renew any previously obtained certificates that expire in less than 30 days. The same plugin and options that were used when the certificate was originally issued will be used for the renewal attempt. Since renew only renews certificates that are near expiry, it can be run as frequently as you want; it will usually take no action.

So, let’s open the crontab by executing the following line in your Linux terminal.

sudo crontab -e

We have to use the root user’s crontab, which is why we’re using sudo: only the root user has permission to execute the certbot renew command. If you’re wondering what a crontab is: a crontab (cron table) is just a list of cron jobs that you want to run on a schedule.

You may be asked to select an editor. Select Nano or  /bin/nano if it’s available by typing its number and pressing Enter. Vi and other more advanced editors may be preferred by advanced users, but Nano is an easy editor to get started with.

Use the arrow keys or the page down key to scroll to the bottom of the crontab file in Nano. The lines starting with # are comment lines, which means that cron ignores them.

Now paste or type the following line according to your Certbot type and Linux version. This job will run every Saturday at 3:00 am. When your Let’s Encrypt SSL certificate has less than 30 days left, the certbot renew command will renew it automatically.
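Before pasting, it helps to know how cron reads the entry. A crontab line has five time fields before the command, and the most common mistake is leaving the minute field as `*`, which makes the job fire every minute of the scheduled hour. This small shell sketch (the cron line in it is only an illustrative example) shows the field layout and checks the minute field:

```shell
# A crontab entry has five time fields before the command:
#   minute  hour  day-of-month  month  day-of-week
# "0 3 * * 6" means 03:00 every Saturday (day-of-week 6).
CRON_LINE='0 3 * * 6 certbot renew'

# Sanity check: if the minute field is "*", the job would run
# every minute between 3:00 and 3:59 instead of once.
minute=$(echo "$CRON_LINE" | awk '{print $1}')
if [ "$minute" = "*" ]; then
  echo "WARNING: this job runs every minute of the scheduled hour"
else
  echo "minute field OK: $minute"
fi
```

Running this prints "minute field OK: 0", confirming the job fires once per scheduled hour.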

For those who used wget https://dl.eff.org/certbot-auto:

For Debian Linux 7.x or Ubuntu 14.10 or older:

0 3 * * 6 cd /path/location && ./certbot-auto renew && /etc/init.d/nginx restart

For Debian Linux 8.x or Ubuntu 15.04 or newer:

0 3 * * 6 cd /path/location && ./certbot-auto renew && systemctl restart nginx

For CentOS/RHEL (Red Hat) Linux 4.x/5.x/6.x or older:

0 3 * * 6 cd /path/location && ./certbot-auto renew && service nginx restart

For CentOS/RHEL (Red Hat) Linux 7.x or newer:

0 3 * * 6 cd /path/location && ./certbot-auto renew && systemctl restart nginx

For those who installed Certbot from a Linux repository:

For Debian Linux 7.x or Ubuntu 14.10 or older:

0 3 * * 6 certbot renew && /etc/init.d/nginx restart

For Debian Linux 8.x or Ubuntu 15.04 or newer:

0 3 * * 6 certbot renew && systemctl restart nginx

For CentOS/RHEL (Red Hat) Linux 4.x/5.x/6.x or older:

0 3 * * 6 certbot renew && service nginx restart

For CentOS/RHEL (Red Hat) Linux 7.x or newer:

0 3 * * 6 certbot renew && systemctl restart nginx

Saving the File

Now, press Ctrl-O and then Enter to save the crontab file in Nano. Press Ctrl-X to close Nano after you’ve saved the file.

Step 3: Auto Renew Testing:

Though this part is optional, I recommend testing your auto-renew cron script for errors. It would be a disaster if your Let’s Encrypt certificate failed to renew before expiring due to some error.

Basic Testing using --dry-run:

For error checking we’ll run certbot renew --dry-run (or /path/location/certbot-auto renew --dry-run), which executes the renewal process without actually renewing the certificates.

Execute the following lines on your Linux terminal,

For those who used wget https://dl.eff.org/certbot-auto (replace nginx-restart-command with the restart command for your Linux version):

sudo -i
cd /path/location && ./certbot-auto renew --dry-run && nginx-restart-command

For those who installed Certbot from a Linux repository:

sudo -i
certbot renew --dry-run && nginx-restart-command

Advanced testing using --force-renew

In this advanced testing section we’ll simulate the Let’s Encrypt auto renewal process using the --force-renew flag. As you already know, the certbot renew command only takes action if your certificate has less than 30 days left, but with --force-renew the certificate gets renewed immediately. Remember that Let’s Encrypt only allows 5 duplicate certificates per week for a particular domain or subdomain.

1. Note the date of your current certificate

To view the current expiry date of your Let’s Encrypt certificate, execute the following command in your terminal.

sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem

Take note of the certificate’s expiry date and time – either paste it into a text file or write it down on a piece of paper.
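If you’d like to see what the openssl output looks like before pointing it at your live certificate, you can try the same command on a throwaway self-signed certificate. The /tmp paths and the CN=demo subject below are only for this demonstration:

```shell
# Create a throwaway self-signed certificate valid for 30 days,
# purely to demonstrate the openssl date output.
openssl req -x509 -nodes -newkey rsa:2048 -days 30 \
  -keyout /tmp/demo.key -out /tmp/demo.pem -subj '/CN=demo' 2>/dev/null

# Same command as in the tutorial, pointed at the demo certificate:
# it prints a notBefore= and a notAfter= line.
openssl x509 -noout -dates -in /tmp/demo.pem

# Extract just the expiry date (the notAfter field) for note-taking.
expiry=$(openssl x509 -noout -enddate -in /tmp/demo.pem | cut -d= -f2)
echo "Expires: $expiry"
```

The notAfter value is the date you want to write down for comparison later.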

2. Creating A Cron job

In this step we’ll create a cron job which will get executed after 6 minutes.

Execute the “date” command to find the current time of your Linux server.

In this example my Linux server’s time showed 17:38:05, so let’s create a cron job for 17:44 (17:38 plus 6 minutes).
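Instead of adding the six minutes by hand, you can let date compute the fields for you. This sketch assumes GNU date (standard on Linux servers); the path and restart command in the echoed line are the same placeholders used throughout this tutorial:

```shell
# Minute and hour six minutes from now (GNU date syntax).
minute=$(date -d '+6 minutes' +%M)
hour=$(date -d '+6 minutes' +%H)

# Print a ready-to-paste test entry; adjust the certbot-auto path and
# the restart command for your own setup.
echo "$minute $hour * * * cd /etc/letsencrypt/ && ./certbot-auto renew --force-renew && /etc/init.d/nginx restart"
```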

44 17 * * * cd /etc/letsencrypt/ && ./certbot-auto renew --force-renew && /etc/init.d/nginx restart

Don’t forget to change the time and the Nginx restart command (as per your Linux version).

3. Checking Syslog

After the scheduled time has passed (17:44 in this example), check your system log to verify that the script executed successfully.

To view the system log execute this command,

cat /var/log/syslog

If the cron job appears in syslog, follow the next step; if not, wait a few minutes and reopen the syslog.

4. Check if renewal was successful

Now, let’s check the Let’s Encrypt certificate’s expiry date again,

sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem

Now compare the noted expiry date with the current one. If the date has changed, your auto renewal script works without errors. If not, feel free to drop a comment in the comment section below.
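To compare the two dates without eyeballing them, you can convert each to epoch seconds with GNU date. The two timestamps below are made-up examples in the format openssl prints:

```shell
# Two notAfter timestamps in the format openssl prints (sample values).
old="Jun  1 03:00:00 2019 GMT"
new="Aug 30 03:00:00 2019 GMT"

# GNU date parses this format; a later date yields a larger epoch value.
if [ "$(date -d "$new" +%s)" -gt "$(date -d "$old" +%s)" ]; then
  echo "certificate was renewed"
else
  echo "expiry date unchanged - check your cron job"
fi
```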

Lastly, don’t forget to revert the crontab entry back to the weekly schedule.

Now it is your time!

I tried my best to provide you a complete tutorial on how to renew your Let’s Encrypt SSL certificate automatically on an Nginx server. I hope you liked it.

If you need help just drop a comment.

If you benefited from this tutorial, and would like to support my work, please like my Facebook page.

Thanks,

3 Steps to Set Up Auto Renewal for Let’s Encrypt SSL Certificates (Apache)

In this tutorial, I will show you how to renew your Let’s Encrypt certificates automatically on an Apache server. After this, you won’t need to renew your Let’s Encrypt SSL certificates manually. If you’re using an Nginx server, follow this tutorial here.

How to Auto Renew Let’s Encrypt Certificates?

Setting up Let’s Encrypt auto renewal is simple: you just need to set up a cron job that renews your certificates automatically. But if I only showed you the cron job part, you might run into problems later. That is why this tutorial has 3 steps that let you set up a reliable auto renewal for your Let’s Encrypt SSL certificate. So, let’s get started.


Prerequisites

  • Before getting started with this tutorial, you should have installed Let’s Encrypt SSL certificates on your Apache or Nginx server. If you haven’t done that yet, follow this tutorial here (I am working on it).
  • A running Linux system with shell access and sudo or root privileges.

There are a lot of ACME clients available for Let’s Encrypt certificate installation, but for simplicity we use Certbot.

Let’s get started,

Step 1: Determining Certbot type

It is important to find out which type of Certbot installation you used when you installed your Let’s Encrypt SSL certificate, because the command you put in the cron script depends on it. There are two ways to install Certbot on a Linux server.

a) Using wget https://dl.eff.org/certbot-auto: According to Certbot’s official website, this method is used for older distributions such as Debian 8, CentOS 6, RHEL 6, and Ubuntu versions older than 16.04. If you used the wget method, then you already have a ‘certbot-auto’ script on your server; later we’ll need its location. People generally download certbot-auto to /usr/local/bin/certbot-auto, but you may have put it in /etc/letsencrypt or somewhere else.

If you forgot where it is or deleted it, execute the following lines from your SSH terminal.

wget https://dl.eff.org/certbot-auto
sudo mv certbot-auto /etc/letsencrypt/certbot-auto
sudo chown root /etc/letsencrypt/certbot-auto
sudo chmod 0755 /etc/letsencrypt/certbot-auto

b) Installed from a Linux repository: If you installed Certbot with a command like this

#For CentOS 7 or up
sudo yum install certbot python2-certbot-apache
#For Debian 9 or up and Ubuntu 16.04 up
sudo apt-get install certbot python-certbot-apache

then you installed Certbot from a Linux repository.

 

Step 2: Renewing Let’s Encrypt Certificates Automatically

In this step we will set up Let’s Encrypt auto renewal using cron.

Cron is a utility offered by Linux-like operating systems that automates scheduled tasks. It is a daemon that runs in the background and performs specified operations at predefined times without user intervention.

We are going to set up a scheduled task that executes the certbot renew command every weekend. The certbot renew command attempts to renew any previously obtained certificates that expire in less than 30 days. The same plugin and options that were used when the certificate was originally issued will be used for the renewal attempt. Since renew only renews certificates that are near expiry, it can be run as frequently as you want; it will usually take no action.

So, let’s open the crontab by executing the following line in your Linux terminal.
sudo crontab -e
We have to use the root user’s crontab, which is why we’re using sudo: only the root user has permission to execute the certbot renew command. If you’re wondering what a crontab is: a crontab (cron table) is just a list of cron jobs that you want to run on a schedule.

You may be asked to select an editor. Select Nano or  /bin/nano if it’s available by typing its number and pressing Enter. Vi and other more advanced editors may be preferred by advanced users, but Nano is an easy editor to get started with.

Use the arrow keys or the Page Down key to scroll to the bottom of the crontab file in Nano. The lines starting with # are comments, which cron ignores.

Now paste or type the following line according to your Certbot type and Linux version. This job will run every Saturday at 3:00 am. When your Let’s Encrypt SSL certificate has less than 30 days left, the certbot renew command will renew it automatically.

For those who used wget https://dl.eff.org/certbot-auto:

For Debian Linux 7.x or Ubuntu 14.10 or older:

0 3 * * 6 cd /path/location && ./certbot-auto renew && /etc/init.d/apache2 restart

For Debian Linux 8.x or Ubuntu 15.04 or newer:

0 3 * * 6 cd /path/location && ./certbot-auto renew && systemctl restart apache2.service

For CentOS/RHEL (Red Hat) Linux 4.x/5.x/6.x or older:

0 3 * * 6 cd /path/location && ./certbot-auto renew && service httpd restart

For CentOS/RHEL (Red Hat) Linux 7.x or newer:

0 3 * * 6 cd /path/location && ./certbot-auto renew && systemctl restart httpd.service

For those who installed Certbot from a Linux repository:

For Debian Linux 7.x or Ubuntu 14.10 or older:

0 3 * * 6 certbot renew && /etc/init.d/apache2 restart

For Debian Linux 8.x or Ubuntu 15.04 or newer:

0 3 * * 6 certbot renew && systemctl restart apache2.service

For CentOS/RHEL (Red Hat) Linux 4.x/5.x/6.x or older:

0 3 * * 6 certbot renew && service httpd restart

For CentOS/RHEL (Red Hat) Linux 7.x or newer:

0 3 * * 6 certbot renew && systemctl restart httpd.service

Saving the File

Now, press Ctrl-O and then Enter to save the crontab file in Nano. Press Ctrl-X to close Nano after you’ve saved the file.

Step 3: Letsencrypt Auto Renew Testing:

Though this part is optional, I recommend testing your auto-renew cron script for errors. It would be a disaster if your Let’s Encrypt certificate failed to renew before expiring due to some error.

Basic Testing using --dry-run:

For error checking we’ll run certbot renew --dry-run (or /path/location/certbot-auto renew --dry-run), which executes the renewal process without actually renewing the certificates.

Execute the following lines on your Linux terminal,

For those who used wget https://dl.eff.org/certbot-auto (replace apache-restart-command with the restart command for your Linux version):

sudo -i
cd /path/location && ./certbot-auto renew --dry-run && apache-restart-command

For those who installed Certbot from a Linux repository:

sudo -i
certbot renew --dry-run && apache-restart-command

Advanced testing using --force-renew

In this advanced testing section we’ll simulate the Let’s Encrypt auto renewal process using the --force-renew flag. As you already know, the certbot renew command only takes action if your certificate has less than 30 days left, but with --force-renew the certificate gets renewed immediately. Remember that Let’s Encrypt only allows 5 duplicate certificates per week for a particular domain or subdomain.

1. Note the date of your current certificate

To view the current expiry date of your Let’s Encrypt certificate, execute the following command in your terminal.

sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem

Take note of the certificate’s expiry date and time – either paste it into a text file or write it down on a piece of paper.

2. Creating A Cron job

In this step we’ll create a cron job which will get executed after 6 minutes.

Execute the “date” command to find the current time of your Linux server.

In this example my Linux server’s time showed 17:38:05, so let’s create a cron job for 17:44 (17:38 plus 6 minutes).

44 17 * * * cd /etc/letsencrypt/ && ./certbot-auto renew --force-renew && /etc/init.d/apache2 restart

Don’t forget to change the time and the Apache restart command (as per your Linux version).

3. Checking Syslog

After the scheduled time has passed (17:44 in this example), check your system log to verify that the script executed successfully.

To view the system log execute this command,

cat /var/log/syslog

If the cron job appears in syslog, follow the next step; if not, wait a few minutes and reopen the syslog.

4. Check if renewal was successful

Now, let’s check the Let’s Encrypt certificate’s expiry date again,

sudo openssl x509 -noout -dates -in /etc/letsencrypt/live/your-domain-name/fullchain.pem

Now compare the noted expiry date with the current one. If the date has changed, your auto renewal script works without errors. If not, feel free to drop a comment in the comment section below.

Lastly, don’t forget to revert the crontab entry back to the weekly schedule.

Now it is your time!

I tried my best to provide you a complete tutorial on how to renew your Let’s Encrypt SSL certificate automatically. I hope you liked it.

If you need help just drop a comment.

If you benefited from this tutorial, and would like to support my work, please like my Facebook page.

Thanks,

Set up an FTP Server on Google Cloud Platform

Want to set up an FTP server on Google Cloud Platform? Don’t worry, I am going to show you how to do it.

But before that just let me explain some stuff.

FTP (File Transfer Protocol) is a standard network protocol used to transfer files to and from a remote network. To establish an FTP connection, you need an FTP server and at least one FTP client.

In this tutorial, we will set up an FTP server on Google Cloud using VSFTPD (Very Secure FTP Daemon). For the FTP client, we’re using the FileZilla client on our desktop.

Is FTP secure?

No. The secure version of FTP is FTPS (File Transfer Protocol over Secure Sockets Layer), which is FTP with SSL added for security. Because it uses SSL, it requires a certificate.

Let’s get started,


Step 1: Deploy a Virtual Instance on Google Cloud

To create a Linux FTP server on Google Cloud you have to launch a Linux VM. If you have already deployed one, that works just fine; skip this step.

Open your Google Cloud dashboard and click the hamburger menu in the upper left-hand corner of the screen.

Now hover over Compute Engine and Click on VM Instances.

After that click the Create button to deploy a new VM.

Now choose your new VM’s machine type, server location, etc. as per your requirements.

In the above image I am showing my VM’s specification; I am using f1-micro with Debian GNU/Linux 9.

After that, Click the create button to deploy your VM.

Step 2: Open SSH terminal

After you have successfully deployed your VM, click the SSH button to launch the command terminal.

This is how the SSH command terminal looks. Now follow step 3.

Step 3: Installing VSFTPD

By default, Google Cloud Linux images do not come with an FTP server application, which is why we’re going to install the vsftpd daemon. Let’s update our package list before installing vsftpd.

sudo apt-get update 

sudo apt-get install vsftpd

After installation, create a backup of vsftpd.conf.

sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.back
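It’s worth confirming the backup is a faithful copy before you start editing. The pattern below is demonstrated on a scratch file created with mktemp so you can try it anywhere; on the server the file would be /etc/vsftpd.conf:

```shell
# Scratch stand-in for /etc/vsftpd.conf so the commands run anywhere.
conf=$(mktemp)
echo "listen=NO" > "$conf"

# Same backup step as above, then verify the copy is byte-identical.
cp "$conf" "$conf.back"
cmp -s "$conf" "$conf.back" && echo "backup verified"
```

cmp -s exits with status 0 only when the two files are identical, so "backup verified" is printed only for a good copy.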

With a backup of the configuration in place, we’re ready to configure vsftpd.

Step 4: Create a User

With the SSH terminal open, we’ll create a new Linux user by executing the command below. You can also use an existing user.

sudo adduser tom

Step 5: Configure vsftpd.conf file

There are multiple ways you can set up your vsftpd FTP server. In this step, we’re going to allow a single user with a local shell account to connect over FTP. If you want a secure connection, follow steps 1 to 7. And if you want to create an FTP server that is open to all, follow steps 1 to 6 and then step 8.

So, let’s set up the vsftpd.conf file,

sudo nano /etc/vsftpd.conf

Now, verify that the settings in your configuration match those below.

# Allow anonymous FTP? (Disabled by default).
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES

After that, uncomment the write_enable setting. This will allow users to upload files.

write_enable=YES

Now we’ll also uncomment the chroot setting to prevent FTP-connected users from accessing files or commands outside their directory tree.

chroot_local_user=YES

Next, add the two lines below. The first setting inserts the username into our local_root directory path, and the second defines our FTP user’s default directory.

user_sub_token=$USER
local_root=/home/$USER/ftp

After that, limit the range of ports that can be used for passive FTP.

pasv_min_port=40000
pasv_max_port=50000

This step is optional. If you use userlist_enable, then only listed users are allowed to use FTP, and Linux users not on that list are denied FTP access.

Add the below line to enable user list.

userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO

userlist_deny toggles the logic. When it is set to “YES”, users on the list are denied FTP access. When it is set to “NO”, only users on the list are allowed access.

Now add the user to the userlist by executing the command below.

echo "tom" | sudo tee -a /etc/vsftpd.userlist

You can double-check that with this command.

cat /etc/vsftpd.userlist
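Note that running the echo | tee command twice leaves a duplicate entry in the list. A slightly safer pattern appends the user only when absent. It is shown here on a temporary file so it can be tried anywhere; on the server, LIST would be /etc/vsftpd.userlist and the append would need sudo tee as above:

```shell
# Temporary stand-in for /etc/vsftpd.userlist.
LIST=$(mktemp)
user=tom

# -q: quiet, -x: match the whole line. Append only if not already listed.
grep -qx "$user" "$LIST" || echo "$user" >> "$LIST"
grep -qx "$user" "$LIST" || echo "$user" >> "$LIST"   # second run is a no-op

cat "$LIST"
```

Even after two runs the list contains the user exactly once.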

Save and restart vsftpd:

Now, save the file by pressing Ctrl-X, then Y, then Enter.

Now, we need to restart the server for the changes to take effect:

sudo systemctl restart vsftpd

Step 6: Preparing an FTP Directory

You can make your FTP server more secure by restricting users to a specific directory. We already did that by uncommenting the chroot_local_user=YES setting; vsftpd accomplishes this with chroot jails.

Because of the way vsftpd secures the chroot directory, the user cannot write or upload anything to it. To solve this problem, we will create an ftp directory to serve as the chroot and a writable upload directory inside it to hold the actual files.

Now, execute the following commands to create the chroot directory (matching the local_root=/home/$USER/ftp setting above) and the upload directory inside it.

sudo mkdir /home/tom/ftp

sudo mkdir /home/tom/ftp/upload

Now remove write permissions from the chroot directory with the following commands:

sudo chown nobody:nogroup /home/tom/ftp

sudo chmod a-w /home/tom/ftp

Let’s make the upload directory writable by our user.

sudo chown tom:tom /home/tom/ftp/upload

Now restart vsftpd for the changes to take effect:

sudo systemctl restart vsftpd

Step 7: FTP/S or FTP over SSL setup (optional)

Generally FTP does not encrypt any data in transit, which means your data and credentials could be read by someone else. To provide that encryption we will enable TLS/SSL.

Before that, let’s create an SSL certificate using OpenSSL. All Google Cloud Linux VMs come with OpenSSL pre-installed, so no extra installation steps are needed.

Let’s generate the self signed SSL certificate files.

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem -subj '/CN=localhost'

The above command will create a self-signed SSL certificate, valid for 365 days, at the /etc/ssl/private location.

Once you’ve created the certificates, open the vsftpd configuration file again:

sudo nano /etc/vsftpd.conf

Now, add the two lines.

rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem

Next, enable SSL by changing the ssl_enable setting from ‘no’ to ‘yes’ as in the line below.

ssl_enable=YES

After that, add the following lines to explicitly deny anonymous connections over SSL and to require SSL for both data transfer and logins:

allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES

For more robust security, let’s require TLS by adding the following lines:

ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO

Finally, we will add two more options. First, we will not require SSL reuse because it can break many FTP clients. We will require “high” encryption cipher suites, which currently means key lengths equal to or greater than 128 bits:

require_ssl_reuse=NO
ssl_ciphers=HIGH

Now, save the file by pressing Ctrl-X, then Y, then Enter.

Now, we need to restart the server for the changes to take effect:

sudo systemctl restart vsftpd

Step 8: Opening Ports in Google Cloud Firewall

In this step we’ll open some ports in the Google Cloud firewall. Without this you cannot successfully connect to your FTP server.

On your Google Cloud dashboard, click the hamburger menu in the upper left-hand corner of the screen, then scroll down to VPC network and click Firewall rules.

After that, press the CREATE FIREWALL RULE button.

Now set ‘Targets’ to ‘All instances in the network’, then set the ‘Source IP ranges’ to 0.0.0.0/0. Lastly, in the ‘Protocols and ports’ field, select tcp and type the following ports and port ranges: 20,21,990,40000-50000.

After that click the Create button to save the settings.

Step 9: Test and Connect

To connect to your Google Cloud FTP server you need to set up an FTP client on your local computer. Though web browsers such as Google Chrome, Firefox, and Opera support FTP, their features are limited. That is why I recommend using an FTP client application like FileZilla, WinSCP, or Cyberduck.

 

First, Open your Google cloud dashboard and copy your VM’s external IP address.

For the sake of this guide, I will use Filezilla Ftp client application.

Download filezilla by clicking here.

After you have installed FileZilla, open it and navigate to File >> Site Manager >> New Site.

To connect over normal FTP (without SSL):

Paste the external IP address into the Host field, then set Encryption to ‘Only use plain FTP (insecure)’.

After that, type your username and password, then press the Connect button.

To connect over FTP with SSL:

Paste the external IP address into the Host field, then set Encryption to ‘Use explicit FTP over TLS if available’.

After that, type your username and password, then press the Connect button.

To connect as Anonymous:

To connect as an anonymous user, paste your server’s external IP address into the Host field, then select Anonymous from the Logon Type field. After that, click the Connect button.

Step 10: Open-for-all FTP Server (optional)

Many times readers can’t find exactly what they’re looking for in tutorials. In the previous steps you learned to create an FTP server which is only accessible to Linux users, or only to users listed in ‘userlist_file=/etc/vsftpd.userlist’, with /home/tom/ftp as the readable directory and /home/tom/ftp/upload as the writable one.

So, let’s view some vsftpd.conf examples which may be better suited to your requirements.

If you don’t want to restrict the user to their home directory tree, add a # before the setting

#chroot_local_user=YES

And change the local_root line to make / the default directory.

local_root=/

Anonymous Login:

If you want to share a particular directory with everyone, use the lines below:

# Allow anonymous login
anonymous_enable=YES
# No password is required for an anonymous login (Optional)
no_anon_password=YES
# Maximum transfer rate for an anonymous client in Bytes/second (Optional)
anon_max_rate=30000
# Directory to be used for an anonymous login (Optional)
anon_root=/example/directory/

If you want to enable anonymous uploads, add this line:

anon_upload_enable=YES

And if you want your anonymous users to create directories, you will need:

anon_mkdir_write_enable=YES

Now it is your time!

I tried my best to provide you a complete tutorial on how to set up an FTP server on Google Cloud. I hope you liked it.

If you need help just drop a comment.

If you benefited from this tutorial, and would like to support my work, please like my Facebook page.

Thanks

Connect as ROOT via SFTP on Google Cloud

If you are using FTP or SFTP as a normal user on Google Cloud, then you are losing precious time.

By connecting as the root user via SFTP, you can save half of your time on Google Cloud.

You can access and modify any file you want, and never face a ‘permission denied’ error again.

Let’s get started,


Here are the steps for this tutorial:

  • Generate SSH key pairs
  • Add the public key to metadata
  • Modify the SSH settings on your Linux VM
  • Set up an SFTP client on your desktop PC or Mac


Step 1: Generate SSH keypairs

The first step is to create the key pair (public key and private key) with the “root” username on your PC or Mac.

For windows,

Open PuTTYgen and click the Generate button. Then type “root” in the Key comment field. After that, click the “Save private key” button and save the private key on your desktop. Next, copy the public key.

If you need a detailed step-by-step guide, follow this blog post.

For Mac,

On macOS, navigate to Go >> Utilities >> Terminal.

 Then, execute the command below to generate public and private keys.

ssh-keygen -t rsa -C root -f ~/Desktop/id_rsa

-t rsa = generate RSA keys.

-f ~/Desktop/id_rsa = the “-f” option stores the key files in a particular directory. Here I want to store the SSH key pair in the Desktop folder as id_rsa (the name of the SSH key files).

Execute the command below to copy the public key.

pbcopy < ~/Desktop/id_rsa.pub
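The same key generation can also be done non-interactively. This sketch writes a demo key pair to a temporary directory and prints its fingerprint; on your own machine you would use ~/Desktop/id_rsa as above and pick a real passphrase instead of the empty -N "" used here for the demo:

```shell
# Generate a demo RSA key pair with comment "root" and an empty
# passphrase (-N "") into a temporary directory.
dir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -C root -N "" -f "$dir/id_rsa" >/dev/null

# Show the key's bit length, fingerprint, and comment.
ssh-keygen -lf "$dir/id_rsa.pub"
```

The fingerprint line is a handy way to confirm which public key you are about to paste into the Google Cloud metadata.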

Step 2: Add Public key to Metadata

Now, log in to your Google Cloud account. Then click the hamburger menu in the upper left-hand corner of your Google Cloud Platform dashboard. After that, navigate to Compute Engine >> Metadata >> SSH Keys.

Now, click on Add item and paste the public key which you copied in the previous step. Then click the Save button.

If you need a detailed step-by-step guide, follow this blog post.

Step 3: Modify SSH setting on Linux VM.

After you have added the public key to the Google Cloud metadata, navigate to VM Instances, then click “SSH” to open the SSH terminal. You can also use PuTTY for this job.

Now it is time to edit the SSH config file  to allow remote root login. By default, Remote root login is disabled.

Execute this command to edit the ssh config file.

sudo nano /etc/ssh/sshd_config

There are two options to choose from: you can allow remote root login with the root password or without it. I recommend using it with the root password.

It’s up to you whether you want to use password or not.

With root password:

Within that file, find the line that includes PermitRootLogin and modify it to allow root to log in with a password:

/etc/ssh/sshd_config
PermitRootLogin yes 

After that, save the file by pressing Ctrl+O, then Enter, and exit with Ctrl+X.

Next, to put these changes into effect:

sudo systemctl reload sshd.service

Without root password:

Within that file, find the line that includes PermitRootLogin and modify it so that users can only connect with their SSH key:

/etc/ssh/sshd_config
PermitRootLogin without-password

After that, save the file by pressing Ctrl+O, then Enter, and exit with Ctrl+X.

Next, to put these changes into effect:

sudo systemctl reload sshd.service
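If you prefer not to edit the file by hand, the same change nano makes can be done non-interactively with `sed`. This is a hedged sketch, demonstrated on a sample config line rather than the live file; the commented lines show what you would run with sudo on the real VM:

```shell
# Demonstrate the PermitRootLogin substitution on a sample config line.
# Debian-style configs ship the directive commented out, as simulated here.
printf '#PermitRootLogin prohibit-password\n' > /tmp/sshd_config_demo
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /tmp/sshd_config_demo
cat /tmp/sshd_config_demo   # now reads: PermitRootLogin without-password

# On the real VM you would run (validating the config before reloading):
#   sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /etc/ssh/sshd_config
#   sudo sshd -t && sudo systemctl reload sshd.service
```

The `sshd -t` check is a good habit: a typo in sshd_config can otherwise lock you out of the VM.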

Step 4: Set up an SFTP client on your desktop PC or Mac.

To complete this step you should have already downloaded and set up an SFTP client on your Windows or Mac PC.

If you don’t know how to do that, follow this blog and install an application such as FileZilla, WinSCP, or Cyberduck.

Now, browse and select the private key (which you saved in the first step) from your PC.

In the above image I am using FileZilla and adding the private key that was created with the PuTTYgen application.

With root password:

As you can see in the above image, I typed my root password in the ‘password’ field.

If you don’t know your root password, follow this blog.

Without root password:

If you changed the SSH config setting to ‘without-password’, then you don’t need to enter anything in the ‘password’ field. Just type root in the user field and click the Connect button.

In the above image, I successfully connected as root via SFTP to a Google Cloud VM.

Now it is your time!

I tried my best to provide a complete tutorial on how to connect as root via SFTP on Google Cloud. I hope you liked it.

If you need help just drop a comment.

If you benefited from this tutorial and would like to support my work, please like my Facebook page.

Thanks,

3 Ways to Solve Sftp or Ftp "Permission denied" on Google Cloud

If you are getting permission denied errors while transferring or editing files over SFTP or FTP, then you are in the right place.

In this article you will find 3 ways to deal with this annoying problem.

Before starting this tutorial, you should have already configured an SFTP client to work with your VM on Google Cloud Platform.

So let’s get started.


1. Cause of the permission denied error.

First, it is not a bug or system error; it is a mistake you are making: you are trying to access a file without having the proper permissions.

Let me explain a bit. In Linux, every file or folder belongs to a user and a group. This system exists for better security: unauthorized users and groups cannot modify, or even read, a file or directory. Only the root user has the privilege to access any file or folder on the system. Other users cannot access the root user’s files or folders, but root can access any other user’s files and folders.

Now back to the cause of the problem: when you log in to your Google Cloud VM via SFTP using software like FileZilla, WinSCP, or Cyberduck, you are using a username that does not have access authority over that particular file or folder.

NOW, what is the solution?

There are 3 ways you can avoid this error: 1) Log in as the root user; root has the highest access authority, so you will never face this error as root. 2) Change the file or directory permissions so all users can access it. 3) Log in as the user who owns that file or directory.

 

2. Error messages

a) Google Cloud FileZilla permission denied error.

If you are seeing this error below,

Error /...  : open for write:permission denied

Error File transfer failed

It means that you don’t have the proper permissions to modify or upload those files.

To solve the permission problem, follow the tutorial below.

b) Google Cloud WinSCP permission denied error.

This error pops up, like in the above image, when you are trying to upload or modify something.

Error,

Permission denied.
Error code: 3
Error message from server: Permission denied.

It means that you don’t have the proper access permissions on those files or folders.

To solve the permission problem, follow the tutorial below.

c) Google Cloud Cyberduck permission denied error.

Permission denied. Please contact your web hosting service provider for assistance.

41 OPENDIR
42 READDIR
43 READDIR
44 CLOSE
45 OPEN

If you see an error like the above, it means that you don’t have permission to modify that file.

To solve the permission problem, follow the tutorial below.

1st Solution: Upload and move files to the desired location.

As I told you earlier, normally only the owner of a file or directory can access it.

When you update your metadata on the Google Cloud dashboard, Google Cloud creates a new user on the Linux VM. That user’s home directory is under /home/username, where the username comes from your SSH public key comment.

So, if you upload a file there and later move it to your desired destination, you won’t face the permission denied error.

 

a) Upload the file

Note down or remember the user ID you are using to connect via SFTP.

Using your SFTP client, go to /home/your-user-id. In the above image I am using the FileZilla SFTP client, and my SFTP user ID is “username”.

Upload a file to your /home/your-user-id directory.

In the above image I uploaded a PNG image to the /home/username directory.

b) Move the file to the desired directory

Log in to your Google Cloud account and click the hamburger menu in the upper left-hand corner of the screen. Next, go to Compute Engine, then to VM Instances. After that, click the SSH button.

Now execute the commands below in your terminal.

cd /home/your-user-id

Check the available files

sudo ls

Now move the file to your desired directory:

sudo mv file-name /path/to/location

In the above image I am moving siteyaar-logo.png to the /opt/bitnami/apps/htdocs directory.

Now open your SFTP client and check the file.
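The upload-then-move step can be wrapped in a small shell function. This is a sketch with placeholder directory and file names; on the real VM you would use `sudo mv` because the destination is owned by another user:

```shell
# Move an uploaded file from the SFTP landing directory to its destination.
# Directory and file names below are placeholders, not real server paths.
move_upload() {
  src_dir=$1; file=$2; dest_dir=$3
  mv "$src_dir/$file" "$dest_dir/"   # use 'sudo mv' on the real VM
}

# Demonstration with throwaway directories standing in for /home/your-user-id
# and the web root:
rm -rf /tmp/home_demo /tmp/htdocs_demo
mkdir -p /tmp/home_demo /tmp/htdocs_demo
touch /tmp/home_demo/siteyaar-logo.png
move_upload /tmp/home_demo siteyaar-logo.png /tmp/htdocs_demo
ls /tmp/htdocs_demo
```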

2nd Solution: Log in as the root user in your SFTP client

If you ask me, logging in as the root user is the best way to use your Google Cloud VM with SFTP or FTP clients.

The root user has the privilege of accessing all other users’ files, so when connected as root you will never face the permission denied error again.

This fix works with any FTP or SFTP client, such as FileZilla, WinSCP, Cyberduck, etc.

Logging in as root over SFTP is a topic of its own, which is why I wrote a separate blog post about it. Here is the link.

3rd Solution: Change the file or folder permissions

By changing file or directory (folder) permissions, you can easily fix your permission denied problem. But if you don’t use this method properly, it can leave a door open for hackers.

Every other website on Google suggests these methods, but I don’t recommend them; they are unproductive and time-consuming. I recommend the 1st solution instead.

The process goes like this: 1) check the file or directory permissions, 2) change the permissions to 777, 3) upload or modify the file, 4) change the permissions back to the default.

Steps 2) and 3) alone can solve your problem.

But you should change the permissions back to the default afterwards for better security.

a) Open SSH terminal

Log in to your Google Cloud account and click the hamburger menu in the upper left-hand corner of the screen. Next, go to Compute Engine, then to VM Instances. After that, click the SSH button.

b) Check the file or directory default permission

Execute the below command to get the default permission number for that particular file.

sudo stat -c %a /path/to/file/location

In the above image I am checking the permissions of the wp-config.php file.

Execute the command below to get the default permission number for a particular directory.

sudo stat -c %a /path/to/directory/location

In the above image I am checking the permissions of the htdocs directory.

c) Change the file or directory permissions to 777

Now, execute this command to change the file permissions to 777.

sudo chmod 777 /path/to/file/location

In the above image I am changing the wp-config.php permissions to 777. The number 777 means anyone can read, modify, or execute that file.

Now, execute this command to change the directory permissions to 777.

sudo chmod 777 /path/to/directory/location

In the above image I am changing the htdocs directory permissions to 777. The number 777 means that anyone can now upload files to that directory.

d) Upload or modify files

After you have changed the permissions to 777, it is time to upload or modify that file or directory.

In the above image I am trying to modify wp-config.php using the FileZilla SFTP client.

e) Change the file or directory permissions back to the default

Now change the file or directory back to its default permissions. This closes the security hole you created by opening the permissions to everyone (777).

Execute the below command,

sudo chmod default-permission-number /path/to/location

Warning: If you use the “-R” flag with chmod, it will recursively change every file in the directory to 777. Don’t use it unless you understand the WordPress file and folder permission structure.
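The whole check / open / restore cycle can be sketched in a few lines. This demo runs on a throwaway file so it is safe to try anywhere; on the real VM, prefix stat and chmod with sudo and point them at your actual file:

```shell
# Check the current permission, open it up, then restore it afterwards.
# /tmp/wp-config.sample is a stand-in for the real wp-config.php.
f=/tmp/wp-config.sample
touch "$f" && chmod 640 "$f"     # simulate a typical default permission
orig=$(stat -c %a "$f")          # remember the default (640 here)
chmod 777 "$f"                   # temporarily open the file for the SFTP edit
# ... upload or modify the file over SFTP ...
chmod "$orig" "$f"               # restore the default permission
stat -c %a "$f"                  # prints 640 again
```

Saving the original permission number into a variable first means you never have to guess what the default was when it is time to lock the file back down.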

4th Solution: Log in as bitnami (for Bitnami WordPress users)

This trick only works on Bitnami WordPress. On Bitnami WordPress, all files under /opt/bitnami are owned by the bitnami user, so if you use bitnami as the username when logging in via SFTP or FTP, you can avoid the error.

a) Use bitnami as the username when creating SSH keys

Create an RSA SSH key pair using PuTTYgen and type bitnami in the Key comment section. If you don’t know how to create an SSH key pair, follow the tutorial in the link.

b) Connect over SFTP using the bitnami username

Now, add the public key to your VM instance’s Metadata >> SSH Keys section. If you don’t know how to do it, follow the link.

After that, open your SFTP client and connect to the Google Cloud VM using ‘bitnami’ as the user.

Now it is your time!

I tried my best to provide a complete tutorial on fixing the permission denied error on Google Cloud. I hope you liked it.

If you need help just drop a comment.

If you benefited from this tutorial and would like to support my work, please like my Facebook page.

Thanks,

How to Host and Install Bitnami WordPress on Google Cloud (2019)

In just 5 minutes, you will learn all the necessary hosting and installation details for Bitnami WordPress on the Google Cloud platform.

So, Let’s get started.

Host Bitnami WordPress on Google Cloud

If you are familiar with Google Cloud, then you already know that GCP has multiple versions of WordPress; among them, Bitnami WordPress is the best option for beginners. It comes with pre-configured security settings, which are recommended for your WordPress website.

Because Google Cloud is a cloud platform, you have to configure everything yourself. You can find many tutorials that teach you how to install or launch Bitnami WordPress on Google Cloud, but they rarely explain how to use Bitnami WordPress as a primary hosting platform.

To host Bitnami WordPress on Google Cloud, you have to follow these steps.

For simplicity, I break down those steps into multiple posts.

Follow the tutorials below to launch Bitnami WordPress on Google Cloud.

How To Install Bitnami WordPress On Google Cloud

Installing Bitnami WordPress on GCP is one of the easiest parts of this tutorial. Just a few clicks here and there and you can install Bitnami on Google Cloud.

To install Bitnami WordPress on google cloud follow the steps,

  1. Login to Google Cloud Platform
  2. Launch Bitnami WordPress
  3. Configure Your Virtual Machine
  4. Login to Bitnami WordPress.
  5. Disable Bitnami banner

That's all you need to install Bitnami on Google Cloud.
So let’s get started.

1. Login To Google Cloud Platform

If you don’t have a Google Cloud account, click here and create a new one.

After that, log in to your Google Cloud Console.

2. Launch Bitnami WordPress

In the upper left-hand corner of your screen, click the hamburger menu.

After that, click and open the Marketplace.

Next, type ‘Bitnami WordPress’ in the search bar and press Enter.

As you can see in the image, Google Cloud has multiple versions of Bitnami WordPress.

Click on the listing with the words “WordPress Certified by Bitnami”.

Next, click the ‘Launch on Compute Engine’ button to deploy Bitnami WordPress on Google Cloud.

3. Configure your Virtual Machine

Now carefully select a zone.

Google has cloud servers all over the world, divided by geographical location into ‘regions’. Each region is subdivided into several zones. For example, the us-central1 region in the central United States has zones us-central1-a, us-central1-b, us-central1-c, and us-central1-f. For more information, click here.

Now choose your compute engine machine type as per your budget or needs.

Now choose your Boot Disk Type. A Standard Persistent Disk is a mechanical-type hard disk and thus much slower than an SSD Persistent Disk. Go for the SSD Persistent Disk.

Make sure to check the boxes to allow HTTP and HTTPS traffic.

Click Deploy to install your server. It will take some time (5 to 10 minutes).

4. Login To Bitnami WordPress

Click ‘Log into admin panel’ and use the Admin user and Admin password to log in to your WordPress admin panel.

5. Disable Bitnami banner

This step is optional. If you are annoyed by the Bitnami banner in the bottom right corner of your screen, just follow the steps below to remove it.

First, open the SSH terminal by clicking the SSH button.

Then execute the following command to remove the Bitnami banner.

sudo /opt/bitnami/apps/wordpress/bnconfig --disable_banner 1

Restart the Web Server.

sudo /opt/bitnami/ctlscript.sh restart apache

FAQs

What is Bitnami?

Bitnami is a third-party application provider. Their application catalog contains a growing list of 140+ trusted, prepackaged applications and development runtimes, ready to run anywhere. Choose from single VMs, multi-tier applications, container images, or Kubernetes Helm charts.

Forbes stated that it is “kind of like the Boy Scout who helps the little old lady cross the street.”

Bitnami WordPress is a great way to start your WordPress website. Since it has pre-configured settings, your work is cut short and you can avoid the headaches that come with configuration. The pre-configured settings are based on the industry’s best security practices.

What is Bitnami WordPress Stack?

Bitnami provides all the necessary libraries and software packages to run a WordPress website on a Google Cloud VM instance; this bundle is called the Bitnami WordPress Stack.

The Bitnami WordPress stack contains software packages such as Ghostscript, Apache, ImageMagick, lego, MySQL, OpenSSL, PHP, phpMyAdmin, SQLite, Varnish, WordPress, and WP-CLI.

Bitnami WordPress Stacks:

There are four different WordPress stacks available on Google Cloud.

WordPress Multisite Certified by Bitnami: This version can host and manage multiple websites from the same WordPress instance. These websites can all have unique domain names and can be customized by their owners, while sharing assets such as themes and plugins that are made available by the server admin.

WordPress with NGINX and SSL Certified by Bitnami: This version runs on the NGINX web server; the other three run on the Apache web server.

Bitnami WordPress Multi-Tier: The Multi-Tier version is for large websites that need separate database servers.

You need a minimum of 3 VM instances to run this version.