Cloud Run by Google: Simple, Serverless and Scalable Greatness

Jul 2, 2019 · #Cloud


Providing the right infrastructure for a project that scales well and is robust and secure is not a straightforward task. Google Cloud has taken significant steps toward solutions where the supporting infrastructure is managed automatically. One of its recent offerings is Cloud Run, which not only takes advantage of a Docker containerized app but also requires no management of virtual machines or a Kubernetes engine. This could be a good fit for a wide range of apps, since the migration effort is very low and the work of keeping the infrastructure updated can be significantly reduced.

As an example, we tried Cloud Run with one of our deployed Docker images, which had been running on a VM inside Google Cloud. On that machine we had Docker installed, running three containers: NGINX, Let's Encrypt, and a Tomcat server.
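For context, a setup like the one described above is often expressed as a Compose file. The sketch below is purely illustrative, not our actual configuration: image names, ports, and volume paths are assumptions.

```yaml
# Hypothetical sketch of the original VM setup: NGINX in front of Tomcat,
# with certificates managed by a Let's Encrypt (certbot) container.
version: "3"
services:
  tomcat:
    image: tomcat:9
    expose:
      - "8080"          # Tomcat's default HTTP port
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs:ro   # serve the certs certbot obtains
    depends_on:
      - tomcat
  letsencrypt:
    image: certbot/certbot
    volumes:
      - ./certs:/etc/letsencrypt
```

With Cloud Run, the NGINX and Let's Encrypt pieces of such a setup become unnecessary, since the platform terminates HTTPS itself.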

So first we needed to create a new service. On the Google Cloud console, Cloud Run is now a section in the main menu. The first thing you will notice is that if you are already using the Container Registry, the images are populated automatically. Our Tomcat Docker image was already using port 8080, so there was no need to change anything. Then we configured the memory allocated for the app and started the service. The service started very fast and was as responsive as the Cloud VM. Moreover, services are exposed over HTTPS, so no additional HTTPS configuration was required for this example.
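The same console steps can also be done from the command line with the `gcloud` CLI. This is a sketch under assumptions: the service name, project ID, image name, and region below are placeholders, and the container is expected to listen on port 8080 as Cloud Run requires.

```shell
# Deploy an image that is already in Container Registry to Cloud Run
# (fully managed). PROJECT_ID, the image name, the region, and the
# service name are all placeholders for illustration.
gcloud run deploy my-tomcat-service \
  --image gcr.io/PROJECT_ID/my-tomcat-image \
  --platform managed \
  --region us-central1 \
  --memory 512Mi \
  --allow-unauthenticated
```

On success, `gcloud` prints the service's generated HTTPS URL, which matches what the console shows.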

Once the service is deployed, you have some nice tools to review it. The console shows metrics, where you can review the number of requests and the CPU and memory usage; revisions; and logs. Updating to a new version is straightforward too. Once a new version of the image is uploaded, the same service can be updated. There is no need to create a new service or follow any other process, and a nice console view shows you the history of your service.
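Rolling out a new version from the command line is just a redeploy of the same service. As above, the service name, image tag, and region are hypothetical placeholders.

```shell
# Redeploying the same service with an updated image creates a new
# revision; the service keeps its URL and its revision history.
gcloud run deploy my-tomcat-service \
  --image gcr.io/PROJECT_ID/my-tomcat-image:v2 \
  --platform managed \
  --region us-central1

# List the revisions of the service, mirroring the console's history view.
gcloud run revisions list \
  --service my-tomcat-service \
  --platform managed \
  --region us-central1
```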

And finally, the cost: it seems to scale well too, because billing is based on the resources the service actually uses. You will probably miss having full control of your server, or command-line access to the container itself, but if you want a truly serverless solution for a stateless service, Cloud Run could be a good choice.