The last decade has been quite disruptive in terms of how both application and infrastructure paradigms have evolved. We have moved from monolithic applications to microservices and serverless functions. Equally, on the platform and infrastructure side, we have moved from physical machines to virtual machines to containers.

Despite the tremendous progress in abstracting the layers of infrastructure from the application and providing platforms that developers can deploy code to directly, standardization remains an always-relevant topic.

And while Kubernetes is considered by many to be the de facto infrastructure building block, supported by most on-premises hardware vendors (including Cisco with its Container Platform) and the public cloud providers, it is not an ideal platform for running serverless applications.

This mainly has to do with the fact that serverless applications require ephemeral elasticity, which Kubernetes, by design, does not address. In other words, in the Kubernetes world, for a service to scale and do what a developer intended it to do, it must exist somewhere, running on a pod, and therefore consuming infrastructure. In the serverless world, by contrast, services can be defined, run and scaled up instantaneously, then spun down as required, without needing to run at all times.
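To make that contrast concrete, here is a minimal sketch of an ordinary Kubernetes Deployment, expressed as a Python dict and printed as JSON (which kubectl also accepts alongside YAML). The name, image and replica count are placeholders; the point is simply that those replicas keep running, and consuming cluster resources, whether or not any requests arrive.

```python
import json

# Sketch of a plain Kubernetes Deployment (name and image are placeholders).
# The replicas exist and consume cluster resources regardless of traffic --
# the "always running" model described above.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello"},
    "spec": {
        "replicas": 2,  # fixed capacity, provisioned up front
        "selector": {"matchLabels": {"app": "hello"}},
        "template": {
            "metadata": {"labels": {"app": "hello"}},
            "spec": {
                "containers": [
                    {
                        "name": "hello",
                        "image": "example.com/hello:latest",  # placeholder image
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

if __name__ == "__main__":
    # Pipe the output to `kubectl apply -f -` to create the Deployment.
    print(json.dumps(deployment, indent=2))
```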

That gap between Kubernetes infrastructure and serverless applications is what Knative can fill.
But what is it and how does it work?

Knative is Kubernetes-based open source software for deploying and managing modern serverless workloads. It can be installed on top of any Kubernetes-conformant platform and extends it to run serverless applications by further abstracting away the infrastructure management needed to deploy them. The project is supported by many organizations, including Cisco.

It removes significant complexity for application developers by abstracting the Kubernetes and Istio components and providing coarse-grained, focused APIs that address everyday application-management needs such as routing and traffic management, ephemeral elasticity and auto-scaling, concurrency control, application versioning and blue-green deployments.
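As a rough illustration of those coarse-grained APIs, the sketch below builds a Knative Service manifest as a Python dict and prints it as JSON. The service name, revision names and image are placeholders, and the autoscaling annotation names should be checked against the documentation for the installed Knative version; the intent is only to show how scale-to-zero, a concurrency target and a blue-green style traffic split are expressed in a single declarative object.

```python
import json

# Sketch of a Knative Service manifest (names and image are placeholders).
knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},
    "spec": {
        # Each change to the template produces a new, immutable revision
        # (a point-in-time snapshot of code and configuration).
        "template": {
            "metadata": {
                "name": "hello-v2",
                "annotations": {
                    # Scale to zero when idle, cap at 10 pods,
                    # target roughly 50 concurrent requests per pod.
                    "autoscaling.knative.dev/minScale": "0",
                    "autoscaling.knative.dev/maxScale": "10",
                    "autoscaling.knative.dev/target": "50",
                },
            },
            "spec": {"containers": [{"image": "example.com/hello:v2"}]},
        },
        # Blue-green / canary routing: split traffic between two revisions.
        "traffic": [
            {"revisionName": "hello-v1", "percent": 90},
            {"revisionName": "hello-v2", "percent": 10},
        ],
    },
}

if __name__ == "__main__":
    # Pipe the output to `kubectl apply -f -` to create the Service.
    print(json.dumps(knative_service, indent=2))
```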

The two main components of Knative are Serving and Eventing. Serving is responsible for managing the serverless application's lifecycle on Kubernetes: it manages the configuration and desired state of the workload, including its code, point-in-time snapshots (revisions) and service routing with traffic rules. Effectively, Serving enables containers to run and be treated by developers as scalable services. Eventing provides reactive serverless workloads for cloud-native applications: it delivers events from event sources to event consumers, triggering the container-based services to run.
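The following sketch shows the kind of workload Serving runs and scales: a plain HTTP container that listens on the port Serving injects through the PORT environment variable, and that can also act as an event consumer, since Eventing delivers CloudEvents as HTTP POSTs with ce-* headers. The handler paths and response text are illustrative only.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Knative Serving injects the port to listen on via the PORT env var;
# default to 8080 for local testing.
PORT = int(os.environ.get("PORT", "8080"))


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Ordinary request/response traffic handled by Serving.
        self._respond("Hello from a Knative service\n")

    def do_POST(self):
        # Knative Eventing delivers events as HTTP POSTs (CloudEvents),
        # with event metadata carried in ce-* headers.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)  # consume the event payload
        event_type = self.headers.get("ce-type", "unknown")
        self._respond(f"Received event of type {event_type}\n")

    def _respond(self, body: str):
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)


if __name__ == "__main__":
    HTTPServer(("", PORT), Handler).serve_forever()
```

Packaged into a container image, this is exactly the sort of process Serving can scale up on demand and back down to zero when idle.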

Finally, Knative also comes with a command-line interface (CLI) for managing its resources (services, routes, etc.).

While some organizations (and application use cases) will be better suited to the additional control over infrastructure that the vanilla Kubernetes approach offers, Knative offers an opportunity to extend Kubernetes' capabilities and further abstract the application layer, creating a framework for developers to build and manage serverless applications.

Installation of Knative is straightforward; the only prerequisites are a Kubernetes cluster (v1.14) and Istio (v1.3.5), as documented at https://knative.dev/docs/install/.