How to kill a stuck kubernetes namespace

If you are a hands-on person dealing with cloud technologies, such as containers and Kubernetes clusters, the following script could save you some headaches.

While playing with Kubernetes, if it has not already happened to you, sooner or later you will stumble upon a namespace that remains in the “Terminating” state forever.

Usually this happens when an application is uninstalled in the wrong way, or some weird bug leads to this awkward situation.

If you don’t want to, or can’t, rebuild the entire cluster from scratch, forcing the deletion of the namespace can be a good option.

To force the deletion of a stuck namespace, you can try to run the following script, passing the problematic namespace as argument. The only requirement is to have the kubectl command available, with the kube context properly configured against the target Kubernetes cluster.



#!/bin/bash

die() { echo "$*" 1>&2 ; exit 1; }

need() {
  which "$1" &>/dev/null || die "Binary '$1' is missing but required"
}

need "kubectl"

STUCK_NS="$1"
test -n "$STUCK_NS" || die "Missing namespace: kill-ns <namespace>"

# Fetch the namespace definition, strip its finalizers and push the result
# back through the finalize endpoint to force the deletion.
kubectl get namespace "$STUCK_NS" -o json \
  | tr -d "\n" \
  | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
  | kubectl replace --raw "/api/v1/namespaces/$STUCK_NS/finalize" -f -

Tutorial: Integrating Gitlab and Jenkins

This is a short tutorial showing how to integrate Jenkins and Gitlab.

There are many use cases and customized development processes established across teams. For the sake of the discussion, I will reference the following development process, which could be used in environments with a strong focus on keeping high-quality code on the main branch at all times.

Typical Development Process

Usually the workflow starts as soon as a developer creates or updates a Pull Request. Some people set up processes by running the build, test & deployment from the feature branch. If all tests pass, the Pull Request is considered a good candidate to be merged into the main branch.

Although it can work in many cases, this approach has a potential flaw: it could happen that after the Pull Request is merged, the new version on the main branch does not work as expected (e.g. build failures, test failures, deployment failures, etc.).

The key point is that the feature branch MUST be merged with the main branch before running any build/test/deployment, rather than building the feature branch on its own.

This increases the level of confidence while performing code integration, which is especially crucial when the integration is performed automatically.

After the build/test/deployment completes, the pipeline outcome is reported back to the Pull Request by updating the Git commit status.

If the pipeline fails, the merge button is disabled, forcing the team to fix the issue by pushing additional commits, or to re-execute the pipeline if the problem was not code related.
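The merge-before-build step can be sketched with plain git commands in a throwaway repository (branch and file names below are made up for illustration; in the Jenkins setup, the Git plugin performs the equivalent fetch/checkout/merge using ${gitlabSourceBranch} and ${gitlabTargetBranch}):

```shell
# Simulate "merge before build" in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"
target=$(git symbolic-ref --short HEAD)   # the main branch (gitlabTargetBranch)

echo "base" > app.txt
git add app.txt
git commit -qm "initial commit"

git checkout -qb feature/my-change        # the MR source branch (gitlabSourceBranch)
echo "feature" > feature.txt
git add feature.txt
git commit -qm "feature work"

git checkout -q "$target"                 # meanwhile, the main branch moves on
echo "hotfix" > hotfix.txt
git add hotfix.txt
git commit -qm "hotfix on $target"

# Key step: merge the TARGET branch into the SOURCE branch first, so the
# build/tests run against the future state of the main branch.
git checkout -q feature/my-change
git merge -q "$target" -m "merge $target into feature branch"
echo "merge ok: run build/test/deploy against this merged tree"
```

The merged working tree now contains both the feature changes and whatever landed on the main branch in the meantime, which is exactly the state the pipeline should validate.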

The following diagram shows the example workflow.



The Jenkins Gitlab plugin is the best source of information for supported versions as well as for community discussions.

My setup is based on the following stack:

  • Jenkins 2.89.1
    • Git Plugin 3.6.4
    • Gitlab Plugin 1.5.2
  • GitLab Community Edition 10.2.3


Jenkins Global setup

Configure the Gitlab plugin to point to your Gitlab instance, as shown in the following example.

Jenkins Job

Create a Jenkins job that will be used as a quality gateway. It should be comprehensive, so that all the typical stages are executed (build, test, deploy, etc.). If possible, it should be implemented as a Jenkins pipeline to leverage all the benefits of the Infrastructure as Code paradigm. The following screenshots show an example of a traditional Maven Jenkins job, but the same can easily be translated into a Jenkins pipeline.


The SCM must be configured with Git by pointing to the Git URL where the project is hosted.

In order to build the feature branch and to perform the important merge operation before the build execution, use the following settings:

  • Name: origin
  • Refspec: +refs/heads/*:refs/remotes/origin/* +refs/merge-requests/*/head:refs/remotes/origin/merge-requests/*
  • Branch specifier: origin/${gitlabSourceBranch}
  • Additional behaviour: Merge before build
    • Name of repository: origin
    • Branch to merge to: ${gitlabTargetBranch}

Build Trigger

The Jenkins job must be configured to start automatically as soon as certain events occur. In this example workflow, we want to trigger a Jenkins build whenever one of the two following events occurs:

  1. A Pull Request is created or updated. We don’t want to start a build as soon as a push is performed against our branch, but only upon Pull Request creation/update (unless the Pull Request is already open and its title does not contain the WIP prefix).
  2. A specific comment is added to the Pull Request. It sometimes happens that a build fails and must be restarted. Triggering it from Gitlab allows us to keep the Pull Request status in sync.

Jenkins should be configured like the following example:

Take note of the Gitlab CI Service URL, since it is required to properly configure Gitlab in the next step.

Update Commit Status

The last configuration step is to tell Jenkins to update Gitlab (specifically, the commit that triggered the build) with the build status. Simply add the post-build action as shown below:

GitLab setup

Project creation

Create a new project and upload it to Gitlab. The build process can be based on any tool (e.g. Maven, Gradle, etc.).

Gitlab should contain the source code, as well as the build scripts (e.g. pom.xml) and the Jenkins configuration, such as the Jenkins pipeline (e.g. Jenkinsfile). This example shows instead the Jenkins configuration using the traditional web UI approach, but the same concept applies if the job config is moved to the Jenkinsfile (recommended).

Disable Gitlab Pipeline

Since we will be using Jenkins as the CI tool, it is recommended to disable the Gitlab CI tool to avoid conflicts when setting the commit status.

Navigate to Project / Settings / General / Permissions / Repository / Pipeline and disable it

Gitlab webhook

Once a Jenkins job is configured to kick off upon a Gitlab event, the Gitlab project must be configured as well to send those events to Jenkins.

Navigate to Gitlab project / Settings / Integration and create a new webhook like the following example. Make sure to use the URL shown by the Jenkins Gitlab Plugin and select the relevant events (in this example, only merge requests and comments).


Everything is ready and should work as expected.

To test the setup, create a branch and push some changes. No build should start on Jenkins yet.

When you create a Pull Request, a Jenkins build should start automatically, and the Pull Request shows that the pipeline is running.

Gitlab shows under the Pull Request all Jenkins invocations (Gitlab pipeline) along with the status:

Pivotal Cloud Foundry at Home

This post is to share my experience in deploying Pivotal Cloud Foundry on my own server.

Cloud Foundry is one of the many PaaS (Platform as a Service) options available on the market, and it comes in 3 different flavors:

  1. Cloud Foundry is open source and community based. It can be installed on premises to offer a PaaS to the organization.
  2. Pivotal Cloud Foundry is Pivotal’s version of Cloud Foundry. Starting from the base product, a number of features have been customized and enhanced, with commercial support available. Like the first option, it can be installed on premises.
  3. Pivotal Web Services is Pivotal Cloud Foundry offered as a managed service. For those who want to try it out or don’t want to maintain the platform, it is definitely the fastest and probably the cheapest option. In this case, there is nothing to install, since the platform is maintained by Pivotal Web Services.

In my case, I already have an account with Pivotal Web Services, and I already know how it works from a developer/user point of view.

Purely for educational purposes, I wanted to experience deploying Pivotal Cloud Foundry on my home network as well, although I was fully aware that my hardware did not meet the requirements (not enough RAM on my server). I liked the idea of having a kind of Cloud at home (even for a short time).

By reading the official documentation for Pivotal Cloud Foundry on the vSphere environment, I realized that I had to reorganize my current setup in order to have a chance of success.

In particular, the issues were as follows:

  1. VMware vCenter is not licensed as a free product, and its license cost would not be justified for a Home Lab setup like mine.
  2. A minimum of 128 GB of RAM is required, while my ESXi host has only 32 GB.

I resolved the first point by upgrading my ESXi 5.5 to the latest ESXi 6.5. By doing so, I received a two-month full trial license that I could use to connect a trial version of vCenter to my ESXi.

In addition, to isolate and optimize the network traffic between the Cloud Foundry VMs, I created a VLAN (a new Port Group) connected to my existing virtual switch, with a dedicated subnet for the Cloud Foundry Virtual Machines. Finally, I created a gateway VM to allow routing between the two networks (the DNS server sits on the original network, as do all my home devices).

The following picture shows the Virtual Switch along with the new Port Group.

To resolve the low-RAM issue:

  • After installing the vCenter appliance on top of my ESXi, I moved it to a desktop PC running VMware Player. In this way I saved 10 GB of RAM.
  • Downsized the resource configuration of the VMs within Pivotal Ops Manager as well as Pivotal Elastic Runtime.
  • Tried to configure an external MySQL database running on my QNAP instead of the Pivotal built-in one. Unfortunately it did not work for me, probably due to a wrong MySQL version: the issue was with the execution of a flywaydb SQL script.
  • Disabled all errands.

As shown by the following images, I managed to deploy it with some memory still available on my server.

The Pivotal Cloud Foundry Virtual Machines are visible under the pcf_vms folder.

The Virtual Machines have been configured with the following resources:

CloudU Certification

Cloud Computing as Digital Business revolution enabler

Digital Business Revolution

As an IT professional, there could not be a better time for me to live in, as I am directly seeing the effects of today’s digital business revolution.

Every day we learn that new opportunities and new businesses are created thanks to the multitude of applications, services and, more generally, the technology that can be leveraged.

Even traditional activities, which we have carried out for ages without any IT support, now have some sort of IT system supporting them.

As a whole, technology has been an enabler for developing new business ideas, with key factors such as:

  • The boom of the mobile user population and the consequent business opportunities.
  • IoT, which shows a new use case every day.
  • Big Data & Analytics, which allow value to be extracted from the huge amount of data produced every second.
  • Virtual worlds, where examples like Pokémon GO show how people tend to live more in the virtual world than in real life.
  • Cryptocurrency, with the Blockchain technology and its most popular implementation, Bitcoin, where several products and services are now only a “bit” away.

Regardless of the size of the business, every organization recognizes that flexibility and agility are paramount characteristics that cannot be overlooked; otherwise, the chances are high that IT systems will no longer be aligned with business objectives, with the related consequences.

In recent years, the disruptive technology that has enabled agility and time to market is Cloud Computing.

One great benefit of the Cloud is that small companies and startups can try out their ideas at a fraction of the cost.

With the Cloud computing paradigm, self-starters suddenly had the opportunity to implement their ideas without the need for an upfront investment to set up an IT infrastructure; they could quickly develop and deploy a new product, and shut everything down if the experiment was unsuccessful.

CloudU Certification

If you are interested in the Cloud topic, I suggest you look at the CloudU certification program organized by Rackspace Cloud University.

By going through the course package, you’ll learn the fundamentals of Cloud Computing, which are valid across the various Cloud providers since the program is vendor neutral.

Once you’re ready, you can go through the online exam and (if you pass it) you’ll receive a certificate as I did.


Downloadable version

Value proposition for a Microservice Architecture in a serverless Cloud environment

Value Proposition for the Auto Scale feature in a Cloud environment

With the Cloud paradigm getting more and more popular every year, the providers (AWS first, then Azure, Rackspace, etc.) offer the Auto Scale feature, highlighting its benefits.

In a nutshell, the auto scale feature allows the infrastructure to be configured in such a way that, according to certain metrics, the system automatically adds or removes the underlying Virtual Machines hosting your application, in order to smoothly meet the workload. In contrast with a traditional cluster setup, where the number of machines must be decided upfront, the auto scale feature brings the following benefits:

  1. No need to perform any estimation exercise to figure out the cluster size for the production deployment, and no need to manually provision the hardware (physical or virtual). The cloud watches certain configurable parameters (e.g. CPU, network data in or out, disk I/O, etc.) and automatically adds or deletes the underlying Virtual Machines with the full stack to accommodate the user workload, resulting in efficient usage of resources.
  2. Avoid costs for idle resources during low system load. Since the system can detect situations with no (or low) load, the number of resources can be ramped down to the minimum so that you pay only a fraction.

A step forward: Serverless

There is no doubt that the Auto Scale feature is a step forward compared to the traditional setup, but there is a new approach, going under the name of Serverless, that in my opinion has room for further growth.

Examples of this paradigm are AWS Lambda and Microsoft Azure Functions.

The idea is to write a piece of code that will be executed in the Cloud, where the infrastructure is not a concern at ALL!

When we compare the Serverless approach to the classic Cloud Auto Scale feature, it is easy to spot the following limits of the latter:

  1. Although with the Auto Scale feature the setup of a large cluster to handle a huge workload is no longer a concern, some thought must still be given to the infrastructure design, as the auto scaling metrics need to be configured. Depending on how the metrics are configured, the resource allocation will vary, with the related costs. With the Serverless approach, every service request is executed in parallel on the common Cloud infrastructure, where there is no need to instantiate a specific VM with the custom client’s software stack.
  2. Although Auto Scale can be configured to shrink the system when there is no workload, at least one Virtual Machine must be up and running at all times to keep the system alive. This means that there will be a cost associated with keeping the VM alive even if nobody uses the system for hours or days. With the Serverless approach, the charge occurs only when the code gets executed.
  3. The Auto Scale feature seems more suitable for a traditional layered architecture (Front End, Business, Data), where the entire deployment bundle is scaled (e.g. FE + Business when shipped together, or a web service module). Although it is possible to separate them, usually multiple business services are shipped together in a single deployment bundle (e.g. a WAR module with all web services). If one service becomes busy, it can trigger the auto scaling mechanism, leading to the deployment of the entire bundle with all services and not just the busy one. The Serverless approach promises instead to scale every single service/function independently, which makes it a perfect match for a Microservice Architecture.

Limits and Challenges of Serverless

Looking at the current offering of serverless implementations, the first restriction is the limited number of supported languages and the inability to bring your own dependencies.


  • Microsoft Azure provides support for C#, JavaScript, Bash, PowerShell, PHP and a few more. No Java.
  • AWS provides support for JavaScript (Node.js) and Python. No Java.

Probably Java will become available in the future.

It is clear that the serverless environment consists of pre-built runtimes that are ready to execute the customer code.

For this reason, it is feasible for a Cloud provider to prepare, under the hood, an Auto Scaling cluster for each runtime, shared across all users, and hence hide the infrastructure scaling details.


As I said, the function code works well on top of the pre-built environment, but today it is not possible to specify any dependency required by your code. It means that your code can use only the libraries available in the pre-built environment and the functionality provided by external service calls (e.g. other functions, other remote network services).

Potential Extension of Serverless approach

It would be great if a custom service (e.g. a Java module with all its dependencies and runtime) could be deployed in a Cloud environment in a Serverless fashion, with all the related benefits (e.g. charges occur only when the code is executed, and there are no costs at all during idle time).

As explained above, unfortunately this is not possible yet.

The main impediment is that a custom software stack would have to be built for every single custom service and left idle at the Cloud provider’s expense.

The following trade-off could make it technically feasible, and perhaps even a value-add for the Cloud Provider’s offering.

  • Each custom service can be configured with a deployment bundle that allows the setup of the entire execution environment for the given custom service.
  • The execution environment is not deployed until the first request. Since the environment deployment requires some time, a Circuit Breaker acting as a gateway component could provide a graceful, temporary “out of service” response; it is accepted that from time to time the service is temporarily unavailable. As soon as the environment is ready, the circuit breaker forwards the request to the running service, and subsequent requests are processed by the deployed code. The Cloud Provider watches the usage of the service and keeps the environment up and running until it has been unused for a certain amount of time. In keeping with the Serverless style, the Cloud provider charges the client not based on how long the system has been kept up, but according to Serverless metrics (e.g. number of requests, execution time, etc.). If the system is not used within a given timeout, it is deleted, and the next request will again experience the temporary out-of-service response.
  • Different classes of service can be defined based on several criteria (e.g. the timeout after which the system is deleted).
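The deploy-on-first-request behaviour can be sketched with a toy shell script (everything here is simulated; there is no real Cloud provider or deployment involved). The first request gets the graceful out-of-service answer and kicks off the deployment; later requests are served normally:

```shell
# Toy simulation of the proposed "deploy on first request" trade-off.
STATE_FILE=$(mktemp)
echo "undeployed" > "$STATE_FILE"

handle_request() {
  state=$(cat "$STATE_FILE")
  if [ "$state" = "undeployed" ]; then
    # Kick off the (simulated) environment deployment and degrade gracefully.
    echo "deployed" > "$STATE_FILE"
    echo "503 temporarily out of service: environment is being provisioned"
  else
    echo "200 OK: request served by the deployed service"
  fi
}

handle_request   # first request: circuit breaker answers with a graceful 503
handle_request   # subsequent requests: served by the running environment
```

A real implementation would also need the idle timeout discussed above (tearing the environment down and flipping the state back to "undeployed"), which is omitted here for brevity.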


I believe that in cases where temporary outages are acceptable, this could be a good compromise: a Cloud-scalable system that is even cheaper than the current offering.


To find out more about the Auto Scale feature and the differences across the biggest Cloud Providers, see the following references

The Reactive Manifesto

You may have already heard the expression “Reactive Systems” in the Software Architecture space.

If not, I’d suggest you read the Reactive Manifesto and optionally sign it, as I did.

It is nothing new, but simply four characteristics of a software system that would denote it as a “Reactive” one:

  1. Responsive
  2. Resilient
  3. Elastic
  4. Message Driven

Enjoy your reading.

A general purpose architecture for background processes


The purpose of this article is to present a general purpose architecture for background processes. In this page, “background process” is used interchangeably with “job”.

Like any general purpose architecture, it cannot fulfil every single case, since requirements differ across applications.

This architecture tries to address the requirements listed below, and it should be adjusted accordingly wherever the specific case has a different set of constraints, requirements and wishes.


In order to have a general purpose architecture that can be applied to a high number of scenarios, the following requirements should be satisfied.

Separation of concerns

The business logic performed by the job must be loosely coupled from the launcher logic.

The launcher component is responsible only for invoking a specific job.

The launcher can be either a scheduler that automatically triggers the job according to its configuration, or a manual, explicit trigger action.
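As a minimal illustration of this separation (file names and layout are hypothetical), the launcher can be reduced to a component that only locates and invokes a job, while the job's business logic lives in its own script:

```shell
# Hypothetical CLI launcher: it only locates and invokes a job script;
# the job's business logic lives in its own file under jobs/.
launch_job() {
  job="$1"; shift
  script="jobs/${job}.sh"
  [ -f "$script" ] || { echo "Unknown job: $job" >&2; return 1; }
  sh "$script" "$@"
}

# Demo: create a toy job, then trigger it through the launcher.
mkdir -p jobs
printf '%s\n' '#!/bin/sh' 'echo "cleanup job executed with argument: $1"' > jobs/cleanup.sh
launch_job cleanup nightly
```

Swapping the launcher (scheduler, REST endpoint, manual trigger) never touches the job scripts, which is the separation of concerns the architecture calls for.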

Multiple Execution Environment

A general purpose architecture should allow the jobs to run either in a standalone environment or in a container.

Cluster awareness

In case jobs are deployed and launched within a container (e.g. within a web application), they must support scenarios where the deployment is performed in a clustered environment.

If a scheduler is used as the launcher component to trigger a job, it must take into account that the job, according to its schedule, must be triggered only once and not multiple times across the cluster nodes.


The architecture is based on a launcher (trigger) component that is responsible for running the job.

Since there can be different ways in which a job is launched, the launcher component should contain multiple implementations, such as:

  1. REST API: if the launcher is running in a container, a REST endpoint (or any other protocol) could be used to trigger the job remotely.
  2. Command Line Interface (CLI): a command line interface could be useful to trigger a job (either on a local machine or remotely) using shell commands.
  3. Internal Java API: once the launcher component is deployed along with an enterprise application, it could be useful to expose the launcher API so that other Java components can trigger jobs.

All the launcher implementations could ultimately rely on the popular Quartz Scheduler to actually trigger a job.

Among many other features, it supports deployments in clustered environments, where the scheduler runs on multiple nodes but jobs are still triggered only once.

The job itself can be either a typical Spring Batch job or any generic background process.








Networking at home

Sometimes you need to do things just for fun and not because there is a real need.

This is the case of my home network that I’ve over complicated ON PURPOSE.


  • Two different private LANs
  • 8 Wifi networks
  • Internal DNS service to resolve hostnames under a dedicated subdomain
  • Internal DHCP service (the one provided by the router is disabled) that automatically updates the entries in the DNS (picture below)
  • The two DNS machines are behind a clustered load balancer that redirects the clients’ DNS queries in Round Robin fashion
  • Two NAS devices with over 9 TB of available storage
  • Home server running 24×7, hosting around 20 VMs, backed by an APC battery


Cluster of load balancers to serve DNS queries (this is really over-engineering, done only for fun!!)


My first AWS event on BigData & Analytics

I am glad that I joined the event powered by AWS about BigData and Data Analytics as it was definitely an interesting day.

I found the session useful, as it was characterised by a mix of architectural approaches, demos and best practices when working with BigData and Amazon Web Services (AWS).

A picture taken during the speech shows the AWS cloud services that can be used across the typical workflow:

Collect ==> Store ==> Analyze


With data analytics, we usually have to deal with unstructured data that must be turned into structured data before it can be analysed.

The next picture shows instead how those services could be combined together to process data depending on their nature.


Particular focus was given on the following services:

My thoughts about Amazon Kinesis

Amazon Kinesis consists of a set of services to process stream data in the cloud.

The first service launched was Kinesis Stream, where the data stream capacity (shards) must be determined at creation time, and charges occur accordingly.

For further reading, the following source contains all the key concepts, including the following diagram, which offers a visual representation of what Kinesis is:

Once the records are sent to the stream from the producers, the Kinesis applications (consumers) can consume them.

No autoscaling in Kinesis Stream yet

Surprisingly, the data stream capacity does not automatically scale up/down: a manual stream resizing operation must be carried out to increase/decrease the number of shards of the stream whenever desired. This is a straightforward operation from the AWS console, but I suppose that automatic scaling up/down would be useful.

Partition key as mandatory input

The decision to make the partition key a mandatory input to get/put records from the stream is arguable. It is clear that the partition key is used to provide the record grouping feature, but in my opinion it should be optional, since grouping/sorting might not be a requirement for every use case.
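For illustration, this is the shape of a put-record call, where the partition key cannot be omitted. The stream name and key are made up, and the command is only echoed as a dry run so it can be read without AWS credentials:

```shell
# The partition key is a mandatory parameter of put-record: it determines
# which shard receives the record. Echoed as a dry run; remove "echo" to
# actually call AWS (credentials and an existing stream required).
stream_name="my-stream"     # hypothetical stream
partition_key="user-42"     # mandatory, even when grouping is not needed
payload=$(printf 'hello' | base64)

echo aws kinesis put-record \
  --stream-name "$stream_name" \
  --partition-key "$partition_key" \
  --data "$payload"
```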

Kinesis Firehose

Leveraging Kinesis Stream, AWS has built Kinesis Firehose, which allows users to easily create a delivery stream sending data directly to Amazon Redshift or Amazon S3.



With Firehose, in my opinion, AWS has made a great step forward. While creating a delivery stream, there is no sizing to specify for the underlying shards, nor any partition key to provide as an input parameter, since the service handles them transparently.

Why a temporary S3 bucket to deliver to Redshift?

My only remark regarding Kinesis Firehose is the temporary S3 bucket that must be specified when configuring it against a Redshift target. In my opinion, the intermediate S3 bucket should be transparent to the user. Why should a user have an additional S3 bucket in his account just because Firehose has been implemented with an intermediate store in mind?

The following screenshot shows the concerned configuration:


Grid Computing for a better future

Everyone should donate some processing power

In this post, it is not my intention to talk about the technology behind grid computing, as there are already plenty of sources where you can learn about this topic.

My main goal is instead to raise some awareness of, and promote, the projects relying on grid computing that aim to improve human life, such as discovering treatments for diseases like HIV, Ebola, cancer and so on.

Just to give a minimum of context, the following minimalistic definition of grid computing, taken from Wikipedia, is enough:

Grids are composed of many computers acting together to perform large tasks.

In other terms, when there is a very complex computational problem to resolve, putting together hundreds of thousands of PCs working together is in most cases the only feasible way to go.

Examples of use cases requiring high computational power range from art and cryptography to finance, games, maths, molecular science and so on.

For this reason, the need arose to create a platform for distributed computing to be used by these demanding projects.

One of the platforms for distributed computing that is becoming more and more popular is the one developed by Berkeley University, known as BOINC.






Today there are a number of research projects that leverage BOINC and distributed computing to beat cancer and many other global issues.

The key message is that all those projects need our support and we should all contribute as it is for a good cause.

All you have to do is download a piece of software and let it run. It will receive some work, do it while your PC, laptop or phone is idle, and eventually return the results to the researchers, who will analyse them.

I think it makes no sense to write any tutorial here on how to proceed, as the links below contain enough information on what to do; but should you encounter any problem or have questions on this topic, I’ll be delighted to help you, as this is my active contribution to this cause.

My Contributions

I list here the actions that I took to contribute to this cause. You can do the same, or adapt them to your possibilities.

Download the BOINC software

The first step is to download the software from BOINC

I installed it on

  • my Android phone
  • my laptop
  • my desktop
  • a dedicated VM running on my VMware ESXi host (24×7)

Although you can configure the program to run only when your device is not doing anything else, you need to be aware that there will be a slightly higher power consumption, as intensive CPU work is not free. This is all about donations.

In my case, the power consumption of a VM running 24×7 (with limited CPU) is only a few Watts.


Join a Project

You can either join single projects or decide to use an Account Manager like BAM (as I did), where you centrally manage the projects that you wish to contribute to.

At current time, I’ve joined

My contributions are visible here

Spread the word

I wrote this article 🙂


Enterprise Architecture frameworks comparison

Before talking about Enterprise Architecture (EA) frameworks comparison, it is important to have clearly in mind what EA itself is all about:

From Wikipedia

Enterprise architecture (EA) is “a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a holistic approach at all times, for the successful development and execution of strategy. Enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies. These practices utilize the various aspects of an enterprise to identify, motivate, and achieve these changes.”

Large organizations embracing the EA practice should consider it as their own business, with continuous development, resources and budget allocated for it.

Since a number of frameworks claiming to provide guidelines on how to develop EA have been created, I was curious to analyze the differences between them. During my research, I came across the following comparison made by Pragmaticea.

The comparison, carried out by Pragmaticea, analyzes four frameworks (MAGENTA, TOGAF, Zachman, PEAF), and for obvious reasons the best one turns out to be PEAF, sponsored by Pragmaticea 🙂

I personally don’t agree with that analysis, but what I found very unfair is placing TOGAF last.

I don’t want to be a TOGAF advocate, but the following statement in their report is quite arguable:

From comparison made by Pragmaticea

TOGAF is mostly concerned with IT rather than the entire organization

Answering this point, I would refer to the Business Architecture domain (Phase B) as defined in TOGAF.


As far as I am concerned, according to TOGAF, the Business Architecture is all about business:

As a matter of fact, the Business Architecture:

  • focuses on business capabilities;
  • is owned by business people and not by IT people;
  • is not concerned with IT execution.

Having said that, it is true that TOGAF also addresses IT execution concerns, but this is done from Phase E onwards, and this function is usually owned by the PMO (Project Management Office).

Moreover, I found this statement misleading because we are all in the business of setting up an EA framework, and the scope of EA (regardless of the framework adopted) is all about designing how the IT systems should operate in order to align them with the business concerns.

From this perspective, I would say that TOGAF fully addresses this concern.

AWS Route53 Ip updater utility program

I started exploring AWS (Amazon Web Services) back in 2007, and since then I’ve been keeping an eye on their services and offerings, as I believe they play an important role in the cloud space.

This post is related to the AWS Route53 service, which allows you to leverage the distributed Amazon DNS.
As an AWS Route53 user and owner of a domain, it is possible to easily associate domain names with public IPs and, as usual, you get charged on a usage basis.

It is possible to use the AWS Route53 service either to point hostnames to EC2 components, like instances and ELBs (Elastic Load Balancers), or to external machines.

Since I have been using AWS Route53 to point my domain to a machine with a dynamic IP, I was forced to log into the AWS console to update the configuration every time a new IP was assigned to my machine.

I was looking around for a utility like noip, where you install a program that detects your public IP and updates your AWS Route53 record.

Surprisingly, I could not find anything that suited my needs, so I decided to write my own solution leveraging the AWS Route53 API.

I’ve implemented this small utility in Java, and the source code along with the binaries is available for download on GitHub.
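The core of such an updater can be sketched with a few shell commands. The hosted zone ID, record name and IP below are placeholders, and the final AWS call is echoed as a dry run so the sketch can be read without an AWS account; the idea is simply to detect the current public IP and submit an UPSERT change batch to Route53:

```shell
# 1) Detect the current public IP (placeholder here; in practice something
#    like: ip=$(curl -s https://checkip.amazonaws.com) )
ip="203.0.113.10"

# 2) Build the Route53 change batch: UPSERT creates or updates the A record.
batch_file=$(mktemp)
cat > "$batch_file" <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "home.example.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "$ip" }]
    }
  }]
}
EOF

# 3) Submit it (echoed as a dry run; ZZZEXAMPLE is a placeholder zone id).
echo aws route53 change-resource-record-sets \
  --hosted-zone-id ZZZEXAMPLE \
  --change-batch "file://$batch_file"
```

Run periodically (e.g. from cron), this keeps the record aligned with a dynamic IP, which is essentially what the Java utility automates.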

Additional use case:

Another interesting use case for my utility is when you run one or more AWS EC2 instances and would like to point some hostnames to them.

One possible way to do so would be to reserve one or more AWS Elastic IPs and then configure Route53 to point your hostname to the reserved public IP.

Unfortunately, only the first Elastic IP is free, as AWS will charge for additional ones.

Installing this utility program on your AWS EC2 instance will do the trick, as the public IP associated with the EC2 instance will be automatically registered on AWS Route53 without the need to reserve (and get charged for) an Elastic IP associated with the instance.

Of course, this approach is only feasible for EC2 instances where you have full control, and not for services such as the load balancer.