Scaling with microservices

Independent scalability is a major benefit of microservices, but it is challenging to implement. Scale microservices with a focus on users' priorities.

IT teams can face several challenges when scaling microservices-based applications.

With a monolithic application, IT teams can carry out straightforward, well-established tactics to scale both vertically and horizontally. A load balancer can allocate traffic across various resources as needed. If the load on the application becomes too great, teams can even spin up new instances of the application to create more room for workloads.

A monolithic application is deployed as a single unit behind the load balancer. All you need to do is add more resources as transaction volume increases. However, the components of a monolith cannot scale independently of one another, so you might need to deploy more resources for the entire application even if demand grows for only one individual component.

On the other hand, a microservices-based application comprises a collection of loosely coupled services built to run on a mix of platforms. Because of the distributed nature of a microservices-based architecture, IT teams must scale traffic differently than they would with a monolithic application. They must devise scalability strategies that protect microservices-based applications from unexpected outages and help maintain fault tolerance.

Scaling concepts

Teams must have a firm grasp of scalability and the reasons for pursuing it.

The scale cube, introduced in The Art of Scalability by Martin L. Abbott and Michael T. Fisher of AKF Partners, is a three-dimensional model that illustrates three approaches to application scaling, represented by the cube's X, Y and Z axes. The traditional monolithic scaling method of replicating copies of the application falls along the X axis. Microservices-based scaling, and other approaches that break up monolithic code, fall along the Y axis. Lastly, Z-axis scaling splits servers based on geography or customer base in order to strengthen fault isolation.
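As a small illustration of Z-axis scaling, the routine below routes each customer to a shard by a stable hash of the customer ID. This is only a sketch: the shard names and the hash function are invented for the example, not taken from any particular product.

```javascript
// Z-axis scaling sketch: split traffic by customer, so each shard holds
// only its own customers' load and faults stay isolated to one shard.
const shards = ['us-east', 'eu-west', 'ap-south']; // hypothetical shard names

function shardFor(customerId) {
  // Simple deterministic hash so the same customer always lands
  // on the same shard
  let hash = 0;
  for (const ch of customerId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return shards[hash % shards.length];
}

console.log(shardFor('customer-42')); // same customer, same shard, every time
```

Because the mapping is deterministic, any router instance can compute it without shared state, which is what makes this split easy to place in front of a fleet of services.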

Ways to monitor and optimise performance

End-user performance is the most important aspect of a microservices-based application. Users notice slow and unintuitive application performance immediately. Even if a team uses the best technologies and tools to build a microservices-based application, that IT strategy doesn't pay off if there is no improvement in user experience.

Teams should prioritise application performance and the end user's perspective to efficiently address microservices scaling issues. To prevent performance problems in a microservices-based application, take advantage of an application delivery controller that provides Layer 7 load balancing and facilitates scaling automation. Choose systems designed to handle microservices applications, and use them to monitor and optimise the performance of services in real time.
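To make the monitoring idea concrete, here is a minimal sketch of tracking a service's 95th-percentile response time, the kind of signal a team (or a Layer 7 balancer's health checks) can watch in real time. The service names and the p95 cut-off are assumptions for illustration, not a feature of any particular controller.

```javascript
// Collect per-service response-time samples and report a p95 latency,
// so a degrading service stands out before users complain.
const samples = {};

function record(service, ms) {
  // Append one response-time sample (in milliseconds) for a service
  (samples[service] = samples[service] || []).push(ms);
}

function p95(service) {
  // 95th percentile of the recorded samples; 0 if nothing recorded yet
  const sorted = [...(samples[service] || [])].sort((a, b) => a - b);
  if (sorted.length === 0) return 0;
  return sorted[Math.floor(sorted.length * 0.95)];
}

record('catalog-service', 120);
record('catalog-service', 480);
console.log(p95('catalog-service')); // -> 480
```

In practice these samples would feed a time-series store rather than an in-memory object, but the shape of the measurement is the same.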

To scale a microservices-based application effectively, teams must also track performance and efficiency goals. An effective monitoring system, alongside a scaling strategy, can help maintain optimal performance for a microservices-based application.

Tracing the problems

All software application teams should take advantage of logging; however, tracing is difficult to carry out in a microservices-based application. A microservices architecture comprises many services and service instances, likely spread across multiple systems.

Every service instance can write log data, such as errors and load balancing issues. The application support staff must aggregate the logs, then search and analyse them when needed.
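A minimal sketch of what aggregable log output can look like: each instance emits one JSON line carrying its service name and a correlation ID, so support staff can stitch a single request's trail back together across services. The field names here are illustrative, not a standard.

```javascript
// Structured logging sketch: one JSON object per line, with a shared
// correlationId so an aggregator can join logs across service instances.
function logEvent(service, correlationId, level, message) {
  const entry = {
    timestamp: new Date().toISOString(),
    service,          // which microservice wrote this line
    correlationId,    // same ID across every service touched by the request
    level,
    message
  };
  console.log(JSON.stringify(entry));
  return entry;
}

logEvent('checkout-service', 'req-8f3a', 'error', 'payment gateway timeout');
```

Because every line is machine-parseable and carries the same correlation ID end to end, searching "everything that happened to request req-8f3a" becomes a single query in the aggregator.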

Allocate resources appropriately

IT teams should remember that resource availability and allocation play a vital role in scaling a microservices application. Resource allocation presents several particular challenges. The first layer, the hardware layer, must be appropriate for the microservices ecosystem: prioritise particular microservices for CPU, RAM and disk storage allocation.

Teams must understand scalability problems, set achievable goals with qualitative and quantitative consideration, and then apply resource-appropriate measures with a view toward performance and the end user's perspective.

about us:

kitsune (https://www.getkitsune.com)

kitsune is a cloud-native framework that enables developers to create full-stack serverless web applications without having to worry about architecture, scalability and maintenance. kitsune also provides an HTML-based language for developers, making it the simplest way to build serverless web apps.

CIOs must digitise internal IT processes to keep pace with change

Be more adaptable than ever before to help businesses keep in step with the pace of innovation

The pace of change is faster than ever, which means CIOs need to think differently about the role of IT in the enterprise.

Traditionally, the effort of the IT department complemented the goals of business. The IT function developed the systems and software to support new business processes. Projects used to be run top-down with the executive management’s strategy for the business executed through the use of software-powered business processes. Projects took a waterfall approach, ran over several years and usually involved the deployment of major pieces of enterprise software infrastructure.

But the risks associated with major implementations failing, or the business changing before the systems were fully deployed, have driven a rise in different approaches to IT, where new functionality is delivered at a faster rate.

This pace of change is being driven by the way the web has evolved. Gartner believes the worldwide web is entering its third phase: in the 1990s, for the majority of users, the web was read-only; at the start of the 21st century, social media gave users a writeable web, allowing people to share comments, videos, pictures and “likes” with anyone who wanted to follow them; and now we are “on the cusp of Web 3.0, where the web becomes executable”.

IT needs to evolve into a connected intelligent architecture.

Digital transformation redefined

Rather than thinking of the IT that supports business as a fixed set of software and hardware, the software is part of a continuous development and continuous integration process, while hardware is swapped in and out and workloads shift between on-premises data centres and the public cloud, based on business requirements.

Some would argue that this is what a digital transformation means for the IT department – instead of delivering IT projects, its role becomes about delivering capabilities on a continual basis, which enable the business to adapt and take advantage of new innovations.

To be successful, organisations need to be able to fail fast and work in an iterative and innovative way.

The CIO needs an entirely new strategy, where change is the only constant.



Encourage frameworks to stay on-course...

An example from the airline industry

Despite turbulence and other conditions keeping airplanes off course 90 percent of the flight time, most flights arrive at the correct destination at the intended time.

The reason for this phenomenon is quite simple — through air traffic control and the inertial guidance system, pilots are constantly course correcting. When immediately addressed, these course corrections are not hard to manage. When these course corrections don’t regularly happen, catastrophe can result.

For example, in 1979 a passenger jet with 257 people on board left New Zealand for a sightseeing flight to Antarctica and back. However, the pilots were unaware that someone had altered the flight coordinates by a measly two degrees, putting them 28 miles east of where they assumed they were. Approaching Antarctica, the pilots descended to give the passengers a view of the brilliant landscapes. Sadly, the incorrect coordinates had placed them directly in the path of the active volcano Mount Erebus. The plane crashed into the volcano, killing everyone on board.

An error of only a couple of degrees brought about an enormous tragedy.


This is common across every large organisation

Consider flights as an analogy: organisations also have processes and decision makers to steer them. Teams make decisions with clear knowledge of their trusted resources, both people and technology, and mostly take informed decisions that keep the organisation on course. While this worked for several decades, today we see unicorns across the country causing disruption, and hence the need for transformation.

A proposed framework for modern organisations

Identify the people and teams who have unconventional interests, and encourage them to spend 20 percent of their time working on whatever they believe will benefit your customers, both internal and external, using the right resources (unconventional and innovative technology).

Consider the following split of our work life (the blood-corpuscle analogy):

We suggest encouraging a framework that strikes the right balance: just as human bodies need both kinds of blood corpuscles, organisations can thrive on this formula, as Google has been doing for years.



A serverless web-server

improving your application performance

Today most web apps are hosted on web servers such as Apache. These web servers are deployed on virtual or physical servers in data centres. The only way to scale is to add a load balancer on top and keep the right amount of idle infrastructure on hand, to be added as requests increase.

What changed with cloud?

Let's take AWS as an example. AWS today provides global components such as a CDN (CloudFront), Lambda@Edge and S3.

With a CDN you can deliver content to your customer from the nearest data centre of your cloud provider.
With Lambda@Edge you get 5 seconds of compute time at the nearest data centre of your cloud provider.
With S3 you get access to a globally available storage system that scales in real time.

The Serverless web-server

With these components (content delivery, compute, storage) available on cloud providers, imagine a web-server that:

  1. enables automatic content delivery via CDN.

  2. identifies slow-changing vs. fast-changing content and uses a global storage for faster responses.

  3. leverages the edge compute to execute the business logic of the request.
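The second step above can be sketched as a routing rule: serve slow-changing content from global storage and send everything else to the origin. The classification by file extension here is a simplistic assumption for illustration, not kitsune's actual heuristic.

```javascript
// Route each request either to global storage (slow-changing content)
// or to the origin (fast-changing content). Classifying by extension
// is a deliberately naive stand-in for a real change-rate analysis.
const SLOW_CHANGING = new Set(['.css', '.js', '.png', '.jpg', '.woff2']);

function routeRequest(path) {
  const dot = path.lastIndexOf('.');
  const ext = dot === -1 ? '' : path.slice(dot);
  return SLOW_CHANGING.has(ext) ? 'global-storage' : 'origin';
}

console.log(routeRequest('/styles/site.css')); // -> 'global-storage'
console.log(routeRequest('/api/cart'));        // -> 'origin'
```

Every request answered from global storage is one the origin never sees, which is where the acceleration comes from.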

With kitsune you can enable this in a matter of minutes:

  1. Import your existing application as an endpoint into a project

  2. Publish the project to the cloud provider of your choice!

kitsune would automatically create a serverless web server on the cloud and use your existing application as the origin. The runtime would first trigger a deep crawl to identify (and copy, if required) the slow-changing vs. fast-changing content of your application. It would then leverage a global storage (such as S3 on AWS or Blob Storage on Azure) to store the slow-changing content and accelerate responses.

kitsune would also expose APIs for you to trigger a fresh deployment or manage the environment.

If you would like to experience a serverless web-server, request a demo below:

Get a Demo


Adding a cookie at viewer_response using Lambda@Edge

a simple piece of code, but figuring out the right way to handle cookies was painful!

This might seem quite simple but it took us quite some time to figure out the solution. Hence we decided to share it with the community :)

It started with a simple requirement of inserting a security token into every response that goes out of CloudFront (the CDN of AWS). Hence the obvious architecture was to intercept the response at the viewer_response event by attaching a Lambda@Edge function to it.

But the challenge came when we tried to add a new cookie while ensuring it did not clobber any cookies the origin might also be setting.

Here is the code which finally worked:

// Runs inside the Lambda@Edge viewer-response handler, where:
//   const response = event.Records[0].cf.response;
// cookie1 and cookie2 hold the token values to attach.
let finalCookieArray = [];
if (response.headers['set-cookie']) {
  // Keep every cookie the origin is already setting
  for (const cookie of response.headers['set-cookie']) {
    finalCookieArray.push(cookie.value);
  }
}
finalCookieArray.push(`new-c1=${cookie1}; SameSite=Strict;`);
finalCookieArray.push(`new-c2=${cookie2}; SameSite=Strict;`);
// CloudFront expects one { key, value } object per Set-Cookie header,
// so emit an array of objects rather than a single object whose
// value is an array.
response.headers['set-cookie'] = finalCookieArray.map(value => ({
  key: 'Set-Cookie',
  value: value
}));

The above code ensures that whenever the origin sets its own cookies, the new cookies are appended to the outgoing response instead of overwriting the header.

This technique is useful for user-authentication scenarios, session-timer management, watermarking or secure fingerprinting.


