A use case of microservices with FeathersJS: building a geospatial platform

Luc Claustres
The Feathers Flightpath
8 min read · Jun 10, 2019


You might be aware of Geographic Information Systems (GIS), designed to store, retrieve, manage, display, and analyze all types of geographic and spatial data. At Kalisio we develop Open Source geospatial software, that is to say software that manages geolocated assets, but in a more friendly and business-oriented way than GIS tools usually provide. We have built a strong ecosystem composed of various tools and applications providing dozens of web services to deliver our solutions as SaaS:

  • Kaabah, a solution to build and operate Docker Swarm infrastructures.
  • Kargo, a Docker based solution to deploy geospatial services.
  • Krawler, a minimalist Extract-Transform-Load (ETL) tool (more details in this article).
  • Weacast, a platform to gather, expose and make use of weather forecast data (more details in this article).
  • KDK, a kit to simplify the development of geospatial web applications.
  • Kano, a map and weather forecast data explorer in 2D/3D.
  • Akt’n’Map, an application to manage real-time events on the field.
High-level view of our platform

At the beginning, most of these tools and applications were deployed and operated almost independently. In this article we detail how we have scaled our platform to make the exposed services talk to each other transparently.

Problem statement

Most of our products, which rely on FeathersJS for the backend/API, have been developed as a set of loosely coupled modules to avoid building a monolithic piece of software, ensure Separation of Concerns (SoC) and ease maintenance, at least from a source code perspective. The built-in service layer helps decouple the business logic from how it is accessed, based on a simple and unambiguous interface.

Although nothing prevents modules from being deployed in a microservice architectural style, all our applications were initially deployed as monoliths (i.e. hosting all required services in the same process). Indeed, it is the easiest strategy in the first place and covered 99% of our initial use cases. Kaabah can replicate and load-balance different instances of our applications when we need to handle more workload, and we simply use feathers-sync to synchronize service events.

This approach sits somewhere between a true monolith and a true microservices architecture: you scale your entire application, but not its underlying services according to their workload.

Problems arose as we started to integrate services provided by one application into another. For instance, Weacast provides an API to access weather data that is used by our Kano application. Kano also exposes an API to access data scraped by some background Krawler jobs (e.g. k-vigicrues). We wanted to access all these services in our Akt’n’Map application as well. Last but not least, we manage dedicated infrastructures with different instances (i.e. configurations) of these solutions for different customers.

Possible approaches

I must confess I have some trouble with the microservices trend. Indeed, splitting things up along well-defined boundaries and interfaces is probably as old as programming itself, a.k.a. modularity. Using microservices is more a deployment issue than anything else: deploying modules in different process spaces rather than in the same process space. Note that the processes can be located on the same physical host (e.g. when deploying services as containers) or on different physical hosts (e.g. when deploying services in a cluster). Of course this changes how you code a little bit, because you need some inter-process communication layer, but it does not fundamentally change the logical architecture of your software. Microservices can probably be viewed as a means of making your logical architecture dynamically best fit the shape of your physical architecture. But different strategies may be used to simultaneously tackle the problems of scaling and sharing.

Microservices do not really change the logic of your software © https://aws.amazon.com/microservices/

The simplest approach is probably to have a single source of truth (SSOT) for the data you’d like to share (i.e. a single database), which requires you to set up e.g. a MongoDB replica set at scale and configure your services accordingly. Your applications still need to host all services, but some of them can target the same database so that data pushed by one application can be seen by another. This can work well for simple CRUD operations, but it is hard to maintain with more complex logic, because you will need to dispatch all events for real-time scenarios and run hooks in all applications to ensure consistent behavior, leading to a lot of duplicated code.
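As a rough illustration, here is a pseudo-code sketch of the SSOT approach, where the connection URL, service name and collection name are hypothetical, not those of our platform:

```
// Pseudo-code: both apps point a 'features' service at the same MongoDB collection
import feathers from '@feathersjs/feathers'
import service from 'feathers-mongodb'
import { MongoClient } from 'mongodb'

const client = await MongoClient.connect('mongodb://ssot-cluster/geodata') // hypothetical replica set URL
const db = client.db()

// In application A (e.g. Kano) and application B (e.g. Akt'n'Map) alike:
app.use('features', service({ Model: db.collection('features') }))
// Data pushed by one application is immediately visible to the other,
// but hooks and real-time event dispatching must be duplicated in every application
```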

On the backend, another approach is to split your API on a per-responsibility basis, e.g. each module runs in its own “light” app instance(s), and modules communicate with each other through different Feathers clients when they need to. Indeed, making the frontend orchestrate the different calls to the API would make the system highly unreliable, not to say introduce high latency. You can deploy a frontend application serving as an API gateway to ease the client-side logic. If you create a private backend network, internal API apps don’t need authentication at all, since the frontend application will implement it and filter queries according to user authorizations. As a consequence, you can rely on simple proxies like http-proxy-middleware, while express-gateway or AWS API Gateway can tackle more complex scenarios. However, all of this requires manual work, creates a tight coupling with your underlying infrastructure and will not allow auto-scaling unless you add some discovery mechanism.
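For instance, such a gateway can be sketched with Express and the current http-proxy-middleware API; the routes, ports and target hostnames below are hypothetical:

```
// Pseudo-code: a frontend gateway app proxying requests to internal API apps
import express from 'express'
import { createProxyMiddleware } from 'http-proxy-middleware'

const gateway = express()
// Authentication/authorization middleware would filter queries here,
// so the internal apps behind the private network need none
gateway.use('/weacast', createProxyMiddleware({ target: 'http://weacast-api:8081', changeOrigin: true }))
gateway.use('/kano', createProxyMiddleware({ target: 'http://kano-api:8082', changeOrigin: true }))
gateway.listen(8080)
```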

From a developer perspective, the best approach is probably one where each microservice instance is automatically aware of the others, and can access “remote” Feathers services and related events just as it does “locally” defined ones, so let me introduce you to feathers-distributed!

Our way toward microservices

This plugin relies on cote and inherits its key benefits:

  • Zero-configuration: no IP addresses, no ports, no routing to configure
  • Decentralized: no fixed parts, no “manager” nodes, no single point of failure
  • Discovery: services discover each other without a central bookkeeper
  • Fault-tolerant: don’t lose any requests when a service is down
  • Scalable: horizontally scale to any number of machines
  • Performant: process thousands of messages per second

When the plugin initializes, it does the following for your app:

  • creates a publisher to dispatch its locally registered services to other nodes.
  • creates a subscriber to be aware of remotely registered services from other nodes.

By overriding app.use, it also does the following:

  • each local Feathers service of your app creates a responder to handle incoming requests from other nodes.
  • each local Feathers service of your app creates a publisher to dispatch service-level events to other nodes.

When your app becomes aware of a new remotely registered service, the plugin:

  • creates a local Feathers service acting as a proxy to the remote one by creating a requester to send incoming requests to other nodes.
  • this proxy service also creates a subscriber to be aware of service-level events coming from other nodes.
This diagram summarizes what feathers-distributed does

Practical implementation

When using the per-responsibility approach, we ended up with the following pseudo-code in our Kano application to use e.g. our Weacast API:
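(The snippet originally embedded here is not preserved; what follows is a hedged sketch of such client code, where the URL, service name and token handling are assumptions, not the original.)

```
// Pseudo-code: Kano manually instantiating a Feathers client for the Weacast API
import feathers from '@feathersjs/feathers'
import restClient from '@feathersjs/rest-client'
import auth from '@feathersjs/authentication-client'
import fetch from 'node-fetch'

const weacast = feathers()
  .configure(restClient('https://weacast.example.com').fetch(fetch))
  .configure(auth())

// Authenticate against the remote API, assuming all apps share the JWT secret
await weacast.authenticate({ strategy: 'jwt', accessToken })
// Then use the remote services as usual
const forecasts = await weacast.service('forecasts').find()
```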

So far so good, but as you can see authentication adds some complexity, although here we assumed that all apps share the authentication secret, which is probably not the most secure option if you need to publicly expose all your APIs. As you start integrating more APIs, clients stack up, leading to more complex code. Let’s see how feathers-distributed can improve this.

Adding it to your applications is just a matter of integrating two lines of code in your backend when you create your Feathers application:

...
import feathers from '@feathersjs/feathers'
import distribution from '@kalisio/feathers-distributed'

const app = feathers()
// Add distribution plugin with required options
app.configure(distribution({ ... }))

Now the pseudo-code in our Kano application to use e.g. our Weacast API looks like this:
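(Again, the embedded snippet is not preserved; conceptually, since feathers-distributed registers remote services locally, the call collapses to something like this, with the service name assumed for illustration.)

```
// Pseudo-code: the remote Weacast service now appears as a local one
const forecasts = await app.service('forecasts').find()
// No per-API client setup, no explicit authentication against the remote API
```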

And the underlying Weacast API doesn’t need to be publicly exposed if not required. Quite simple, isn’t it?

Our Kano application displaying weather data coming from the Weacast application using feathers-distributed

Last but not least, in order to segregate raw data that is not application-specific and ensure optimal performance, we created a dedicated infrastructure called ODK (Open Data Kit), composed of:

  • a shared database running on a MongoDB Atlas cluster accessed by different services of different applications,
  • batch Krawler-based jobs that feed the database by aggregating and transforming data coming from different Open Data third-party providers.
A simplified view of the internal architecture of our platform

Cloud-ready implementation

feathers-distributed aims to be as zero-conf as possible thanks to cote. Therefore, the discovery backend is usually invisible to the developer (e.g. in a localhost environment). It requires no queue protocols or service registry software, thanks to a clever use of IP broadcast/multicast. However, most cloud infrastructures like Amazon EC2, Google Compute Engine and Microsoft Azure state that IP broadcast and multicast are not supported.

You can still get the same functionality with Docker Cloud’s Weave overlay networks or with Redis. We have selected the latter because of its universality: it is then just a matter of making a Redis server available to your different hosts and defining the COTE_REDIS_DISCOVERY_URL environment variable for your applications, quite easy if you use Docker as we do.
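With Docker this boils down to something like the following, where the image names, network name and Redis URL are illustrative only:

```shell
# Run a Redis server reachable by all application containers
docker run -d --name redis --network backend redis:5

# Point each application at it so cote switches to Redis-based discovery
docker run -d --network backend \
  -e COTE_REDIS_DISCOVERY_URL=redis://redis:6379 \
  kalisio/kano
```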

Can we go further ?

Most of the time microservices avoid a backend monolith, but the frontend remains one, which largely reduces the benefits of microservices in big web applications. As a consequence, a similar micro-frontends approach naturally emerged on the frontend side.

Microservices are mostly used in backends but UIs tend to remain monolithic © https://www.redhat.com/fr/topics/microservices/what-are-microservices

The main problem we wanted to tackle was a way to easily create 2D/3D map views, similar to what Kano provides, in other business applications. Using our modular approach we could have reused our mapping module and implemented similar features in business apps. Of course this would have led to duplicated code that is not so easy to maintain. Moreover, it would have required completely reworking the UI on top of the map view using a different technology (e.g. React instead of Vuejs), while the UI was already fine as is, provided some simple configuration options (e.g. the theme color). The higher the number of applications requiring Kano’s features, the higher the final integration cost of this approach.

Thus, among the different possibilities, we have chosen to isolate Kano as a micro-app embedded into other applications using an iframe, relying on the Window.postMessage API for coordination. However, this low-level API has some restrictions:

  • 100% fire-and-forget
  • no way of getting a response from the other window
  • can only send strings through it
  • Internet Explorer (even Edge) doesn’t support it

We have selected PayPal’s lightweight post-robot library to overcome these limitations. It is then just a matter of binding our internal API using post-robot; you will find more details in our documentation.
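Conceptually, the request/response layer such a library adds on top of a fire-and-forget, string-only channel can be sketched in plain JavaScript: tag each request with an id and resolve the matching promise when the reply comes back. This is an illustration of the pattern only, not post-robot’s actual API or wire format:

```javascript
// Minimal request/response pattern over a fire-and-forget, string-only channel,
// mimicking what a library like post-robot layers over window.postMessage
function createChannel () {
  const listeners = []
  return {
    // Like postMessage: strings only, no reply
    send: (msg) => listeners.forEach(listener => listener(msg)),
    on: (listener) => listeners.push(listener)
  }
}

function createEndpoint (channel) {
  const pending = new Map()   // requests awaiting a response, keyed by id
  const handlers = new Map()  // named request handlers exposed by this endpoint
  let counter = 0
  channel.on((raw) => {
    const msg = JSON.parse(raw) // only strings cross the channel
    if (msg.type === 'request' && handlers.has(msg.name)) {
      const result = handlers.get(msg.name)(msg.data)
      channel.send(JSON.stringify({ type: 'response', id: msg.id, result }))
    } else if (msg.type === 'response' && pending.has(msg.id)) {
      pending.get(msg.id)(msg.result)
      pending.delete(msg.id)
    }
  })
  return {
    // Expose a named handler on this side of the channel
    on: (name, handler) => handlers.set(name, handler),
    // Send a request and get a promise for the matching response
    request: (name, data) => new Promise((resolve) => {
      const id = ++counter
      pending.set(id, resolve)
      channel.send(JSON.stringify({ type: 'request', id, name, data }))
    })
  }
}
```

The embedding application and the Kano iframe would each hold one endpoint, so either side can expose named operations to the other.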

Example of a Kano embedded view (Vuejs frontend) into another application (ExtJs frontend)

If you liked this article, feel free to have a look at our Open Source solutions and enjoy our other articles on Feathers. The Kalisio team!


Digital Craftsman, Co-Founder of Kalisio, Ph.D. in Computer Science, Guitarist, Climber, Runner, Reader, Lover, Father