DEVELOPING APPLICATIONS FOR CLOUD: WHAT YOU NEED TO CONSIDER

Authored by
Midhun M,
Sanal P S,
Neethu Krishnan,
Sruthi Unnikrishnan,
Anuja Cherian

Abstract:

An application designed specifically for the platform on which it runs performs better, is more resilient and is easier to manage. Developing for the cloud is no exception. The most remarkable service offered by the cloud is the ability to create, run and maintain applications using its framework. Developing a cloud architecture requires you to consider the various stages and components of application lifecycle management, the integrated development environments, test and quality management, security testing components and numerous other crucial factors. This white paper presents some of the golden rules you need to know before you jump on the cloud bandwagon.

Why cloud for business

A story to begin with:

A customer wishes to deposit a check by simply taking a picture of it with his or her smartphone. The customer does not appreciate the idea of waiting in a long line at the bank.

The challenge:

Many companies are struggling to deliver faster results because they're stuck with on-premises solutions that are not responsive, connected, or agile.

The solution:

Welcome the cloud. But why? Because the cloud offers the mobility, intelligence, agility, personalization and connected experience demanded by today's digitally native customers.

Lesson learnt:

Speed changes the game. You'll have to work smarter and faster to deliver an unparalleled sales and service experience.

Customer expectations are changing as they eagerly look for solutions and experiences that resonate with them personally. Hyperconnectivity across digital and social channels is prompting today's customers to demand a highly personalized experience across all the channels they use.

Responsiveness, personalization, connectivity, cost: there are multiple reasons for businesses to move to cloud. But you need to take into account several factors prior to building your applications in cloud. This article highlights a few rules you must know when you architect your first cloud-based application.

Business empowerment through cloud

Cloud computing refers to accessing software applications over a network connection, often by reaching remote data centers through a WAN or internet connection. Almost any IT resource can reside in the cloud: a software application or program, a service, or an entire infrastructure.

In simple words, a cloud application is a software program in which cloud-based components and local components work together. The application depends on remote servers to process its logic and is typically accessed through a web browser over a continuous network connection.

Cloud application servers are typically located in remote data centers, usually operated by a third-party cloud infrastructure provider. Cloud-based applications enable several operations such as file storage and sharing, email distribution, data collection and management, CRMs, financial accounting, order and inventory management, and so on.

Why enterprises are turning to cloud

Cloud empowers businesses in many ways. Some of the enterprise benefits of cloud are listed below:

Customer data

  • 360-degree view of customer data
  • enhances collaboration among teams
  • quick insights with centralized data

Ease of use

  • easier to implement and integrate
  • natively mobile and social
  • greater stability with timely updates

IT support

  • flexible subscription plans
  • no additional hardware investments needed
  • flexible and scalable infrastructure

User experience

  • streamlined experience for all departments
  • connects all users on a single platform
  • offers remote accessibility

Innovation

  • future-proof solution
  • focus on innovation and business agility
  • analytics available to all

Cloud computing enables on-demand network access to a shared set of configurable computing resources such as networks, servers, storage, applications and services. Automation, one of the biggest advantages of the cloud, reduces management effort and service-provider interactions. Ultimately, you're relieved of the burden of managing multiple layers such as physical data centers, operating systems and hardware. This helps you reduce costs and spend less time on mundane IT tasks. Your focus shifts to more important goals like driving business innovation and scaling your applications on demand.

There are several best practices or rules you need to follow while designing your cloud-based application.

Cloud development: the dos & don'ts

Cloud computing architecture includes the components and subcomponents required for cloud computing. Typically, these comprise a front-end platform (fat client, thin client, mobile device), a back-end platform (servers, storage), a cloud-based delivery model, and a network (internet, intranet, intercloud). Cloud platform services support immediate scalability changes in the application. In fact, it is better to build your application to be as generic and stateless as possible, so that dynamic scaling does not affect it.

If an application is cloud-ready, it means that the application can be effectively deployed into a public or private cloud. That means, the application can take advantage of the capabilities of the PaaS layer on which it runs. Following these simple rules in your design will make your application cloud-ready, without needing to undergo a complete reimplementation. The same rules are helpful while migrating your existing applications to a dynamic cloud environment.

1- Never assume that local file system is permanent

Instead of using the local file system as a store for temporary information, place that information in a remote store such as a SQL or NoSQL database. Reading static information from the file system is fine.

NoSQL is an approach to database design that can accommodate a wide variety of data models, including key-value, document, column and graph formats. NoSQL is an alternative to traditional relational databases in which data is placed in tables and data schema is carefully designed before the database is built. NoSQL databases support working with large sets of distributed data.
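As a minimal sketch, assuming a relational database bound to the application, the following Java snippet writes transient draft data to a shared database over JDBC instead of to a local temp file. The connection URL, table name, and credentials are illustrative placeholders; in practice they would come from the platform's configuration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TempDataStore {
        // Placeholder connection details; real values would come from the
        // platform's bound data-service credentials, not from the code.
        private static final String URL =
                "jdbc:postgresql://db.internal.example.com:5432/appdb";

        public void saveDraft(String userId, String draftJson) throws SQLException {
            // Write the transient data to a shared store instead of a local
            // temp file, so any node (or a replacement node) can pick it up.
            String sql = "INSERT INTO user_drafts (user_id, payload) VALUES (?, ?)";
            try (Connection conn = DriverManager.getConnection(URL, "app", "secret");
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, userId);
                stmt.setString(2, draftJson);
                stmt.executeUpdate();
            }
        }
    }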

2- Never keep session state in your application

In most applications, the hardest state to eliminate is session state, and it limits the scalability of the application. It is not advisable to store state on the local file system or to keep permanent state in local memory. Unless an application can work seamlessly and rebalance work instantaneously while nodes are added or removed, it cannot operate smoothly in the cloud.

The impact of session state can be minimized by storing it in a centralized location outside the individual application servers. It is difficult to eliminate session state completely, so as a best practice, push the state out to a highly available store that is external to your application server. For instance, you can use a distributed caching store such as IBM WebSphere Extreme Scale, Redis, or Memcached, or an external database (a traditional SQL database or a NoSQL database).
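A minimal sketch of this pattern, assuming a Redis instance reachable by every node and the open-source Jedis client: session attributes are keyed by session ID in the external cache, so any node can serve any request. The host name, port, and timeout values are illustrative.

    import redis.clients.jedis.Jedis;

    public class ExternalSessionStore {
        private static final String REDIS_HOST = "redis.internal.example.com"; // placeholder
        private static final int REDIS_PORT = 6379;
        private static final int TTL_SECONDS = 1800; // 30-minute session timeout

        public void saveAttribute(String sessionId, String key, String value) {
            // Any application node can read this entry, so requests can be
            // routed freely and nodes can be added or removed at will.
            try (Jedis jedis = new Jedis(REDIS_HOST, REDIS_PORT)) {
                jedis.setex(sessionId + ":" + key, TTL_SECONDS, value);
            }
        }

        public String loadAttribute(String sessionId, String key) {
            try (Jedis jedis = new Jedis(REDIS_HOST, REDIS_PORT)) {
                return jedis.get(sessionId + ":" + key);
            }
        }
    }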

3 - Make your application ready for failover

In the cloud, multiple copies of the application are usually used to serve data, enabling both load balancing and failover. Even if one server fails, other servers take over the load. The application should be designed so that there is no data or functionality loss when a single server fails: applications hosted on the other servers should be able to take over the requests that were directed at the failed server.
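To illustrate the idea from the client's side (in practice a load balancer usually does this), the sketch below tries a list of replica endpoints in turn and returns the first healthy response. It uses the standard Java 11 HttpClient; the replica URLs are illustrative placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class FailoverClient {
        private final HttpClient client = HttpClient.newHttpClient();

        // Replica endpoints would normally come from a load balancer or a
        // service registry; these URLs are placeholders.
        private final List<String> replicas = List.of(
                "https://app-a.example.com/api/orders",
                "https://app-b.example.com/api/orders");

        public String fetchOrders() throws Exception {
            Exception last = null;
            for (String url : replicas) {
                try {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    if (response.statusCode() == 200) {
                        return response.body(); // first healthy replica wins
                    }
                } catch (Exception e) {
                    last = e; // this replica is unreachable; try the next one
                }
            }
            throw new IllegalStateException("All replicas failed", last);
        }
    }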

4- Never log to the file system

When you write your logs to the local file system, you increase the chance of losing valuable information needed to debug problems. In a dynamic cloud environment, it is critical to have your logs available on a service that outlives the nodes on which the logs were generated. For instance, PaaS vendors such as Heroku, Cloud Foundry, and PureApplication System provide log aggregators to which log output can be redirected.

Most log frameworks offer different log levels that let you control the amount of information logged. If you know that your log information is going to be sent across the network, you may need to reduce the logging verbosity to keep the traffic overhead manageable.
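As a small sketch using the standard java.util.logging API: write log records to the console so the platform's aggregator can collect them, and read the verbosity from an environment variable (LOG_LEVEL is an assumed name) so it can be tuned without a redeploy.

    import java.util.logging.ConsoleHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class AppLogging {
        private static final Logger LOG = Logger.getLogger("com.example.app");

        public static void init() {
            // Log to the console, never to a file on the local disk; the
            // platform's log drain (Heroku, Cloud Foundry, etc.) captures
            // console output and ships it off the node.
            ConsoleHandler handler = new ConsoleHandler();

            // LOG_LEVEL is an assumed environment variable; lowering the
            // verbosity reduces the traffic sent across the network.
            Level level = Level.parse(System.getenv().getOrDefault("LOG_LEVEL", "INFO"));
            handler.setLevel(level);
            LOG.setLevel(level);
            LOG.setUseParentHandlers(false);
            LOG.addHandler(handler);
        }
    }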

5- Never assume any specific infrastructure dependency

This rule covers several aspects. For instance, it is not wise to assume that the services called by your application live at particular host names or IP addresses. Though service-oriented architecture is common these days, it is still easy to find applications that embed the details of the service endpoints they call.

When those services (peers) are relocated or regenerated within the cloud environment and move to new host names and IP addresses, the calling application's code breaks. Abstracting environment-specific dependencies into a set of property files helps, but it may not be adequate, because you will be constantly updating and changing those properties.

Applications that are agnostic to clustering will be more resilient in the cloud environment. Consulting an external service registry to resolve service endpoints, or delegating the entire routing function to a service bus or a load balancer with a virtual name are considered better options.
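A minimal sketch of externalizing an endpoint: resolve the peer service's address at runtime from the environment (or from a registry lookup) instead of embedding a host name in the code. PAYMENT_SERVICE_URL and the fallback value are assumptions for illustration.

    import java.net.URI;

    public class ServiceEndpoints {
        // The platform, a service registry, or a deployment script would
        // populate this variable; nothing about the host is hard-coded here.
        public static URI paymentService() {
            String url = System.getenv().getOrDefault(
                    "PAYMENT_SERVICE_URL", "http://localhost:8080/payments");
            return URI.create(url);
        }
    }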

6- Never use infrastructure APIs from within your application

It is still quite common for Java developers to create their own threads and manage their own thread pools. While monitoring your application, you will come to appreciate the advantages of avoiding low-level infrastructure APIs: when you create your own thread pools, the cloud's monitoring tools may not be able to help you discover thread bottlenecks. As a best practice, limit the range of APIs used in the application code, and shift the responsibility for infrastructure services to the provider so that the layers of infrastructure can be updated without affecting the application.

Changing the infrastructure becomes more challenging when you start making assumptions about the infrastructure on which your application runs. Think about why your application code is calling an infrastructure service or API. Your application must focus on solving the business problem for which it is created, and should not deal with manipulating the infrastructure on which it runs. It’s better to leave the PaaS solutions in the PaaS layer and keep them out of your application code.
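A short sketch of delegating thread management to the platform, assuming a Java EE / Jakarta EE container that provides a ManagedExecutorService: the application submits work but never creates or sizes its own thread pool, so the platform's monitoring and tuning still apply.

    import javax.annotation.Resource;
    import javax.enterprise.concurrent.ManagedExecutorService;

    public class ReportService {
        // The container injects and owns the executor; the application
        // never calls new Thread(...) or builds its own pool.
        @Resource
        private ManagedExecutorService executor;

        public void generateReportAsync(long reportId) {
            executor.submit(() -> buildReport(reportId));
        }

        private void buildReport(long reportId) {
            // Business logic only; no infrastructure manipulation here.
        }
    }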

7- Never use obscure protocols

Resiliency is a must-have in the cloud, especially when you have to add or remove nodes under load. You don't need to build your own database connection model if the platform can provide it. By delegating the configuration inventory to the platform, applications based on HTTP, SSL, and standard database, queuing, and web service connections will be more resilient in the long run.

You should take steps to modernize and standardize any older or non-standard protocols. Moving to an HTTP-based infrastructure built on standards such as REST, or even SOAP and the WS-* specifications, simplifies porting your system to a new environment. Asynchronous protocols such as IBM MQ or MQTT are still in vogue and are effective for application programming. So take the minimalist approach and select the right tool for the task.
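As an example of sticking to a standard asynchronous protocol, the sketch below publishes a message over MQTT using the Eclipse Paho client rather than a home-grown socket protocol. The broker URL, topic, and payload are illustrative placeholders and would come from configuration in a real deployment.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class TelemetryPublisher {
        public static void main(String[] args) throws Exception {
            // Placeholder broker address; real values belong in configuration.
            MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                    MqttClient.generateClientId());
            client.connect();

            MqttMessage message = new MqttMessage("{\"deviceId\":42,\"temp\":21.5}".getBytes());
            message.setQos(1); // at-least-once delivery
            client.publish("devices/42/telemetry", message);

            client.disconnect();
        }
    }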

8- Never rely on OS-specific features

Applications that use standards-based services and APIs are more portable than those relying on specific OS features. There is a tendency to use OS-specific features even when a higher-level, OS-neutral alternative is available. What works in Linux or a UNIX derivative may not function well on Microsoft Windows.

You can solve this to an extent by using compatibility libraries that make one OS "look like" another. For example, Cygwin is a compatibility library that offers a set of Linux tools in a Windows environment, and Mono provides .NET capabilities on Linux. The best practice, however, is to avoid OS-specific dependencies as much as possible and rely on your service provider or your middleware infrastructure provider.
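As a small illustration, the sketch below lists a directory with the portable java.nio API instead of shelling out to an OS-specific command such as ls or dir, and builds file paths without assuming a particular separator. The "data" and "config" locations are illustrative.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class PortableIo {
        public static void listDataDirectory() throws IOException {
            // Portable directory listing; no Runtime.exec("ls -l") or "cmd /c dir".
            try (Stream<Path> entries = Files.list(Paths.get("data"))) {
                entries.forEach(p -> System.out.println(p.getFileName()));
            }
        }

        public static Path configPath(String fileName) {
            // Build paths with the API instead of concatenating "/" or "\\",
            // so the same code runs on Linux and Windows nodes alike.
            return Paths.get(System.getProperty("user.dir"), "config", fileName);
        }
    }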

9- Never install your application manually

Cloud environments are intended to be created and destroyed far more frequently than their traditional counterparts, so your application needs to be installed frequently and on demand. The installation process must be scripted, with configuration data externalized from the scripts. The basic necessity is to capture your application installation as a set of operating-system-level scripts. Take advantage of the built-in scripting mechanism provided by your middleware platform, if available. If the application installation is small and portable, you can also take advantage of automation tools such as Chef, Puppet, or patterns in PureApplication System.

Answering these questions will help you address the installation challenges (a sketch of the database-availability case follows the list):

  • What is my minimum configuration?
  • Is it mandatory for the database to be available while installing the application?
  • Can the application start without its database?
  • Can I report the problem, and then restore full function when the database becomes available?
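As a sketch of the last two questions, assuming a JDBC data source: the application probes the database at startup (and periodically afterwards), reports the problem, and keeps running in a degraded mode until the database becomes available.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.logging.Logger;

    public class StartupCheck {
        private static final Logger LOG = Logger.getLogger(StartupCheck.class.getName());
        private volatile boolean databaseAvailable;

        // Called at startup and then on a schedule; the application starts
        // either way and simply reports reduced function while the probe fails.
        public void probeDatabase(String jdbcUrl, String user, String password) {
            try (Connection ignored = DriverManager.getConnection(jdbcUrl, user, password)) {
                databaseAvailable = true;
            } catch (SQLException e) {
                databaseAvailable = false;
                LOG.warning("Database unavailable, running in degraded mode: " + e.getMessage());
            }
        }

        public boolean isDatabaseAvailable() {
            return databaseAvailable;
        }
    }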

10- Legal compliance / contractual obligations

One of the most important factors to consider while developing applications for the cloud is the set of legal requirements, together with the requirements of the application owner that bind the developer by way of a contract. More and more countries have begun enforcing data security standards; the GDPR (General Data Protection Regulation) in the EU (European Union) is a recent example. Developers need to consider the applicable laws while the software is being developed. Does the business fall under the purview of the GDPR? If not, are there other legislations that apply to the business? Has the client requested any specific security requirements?

You need to ensure that data storage and data flow do not violate any of these. There may be a requirement that the data must not travel outside the EU. The server and persistent storage may actually be in the EU, but if a third-party service used to process the data is located outside the EU, for example an email service that sends notifications or a log analysis tool that collects the logs and analyzes performance statistics, the result is a contract or legislation violation that can create legal liability for the developer.

Why data decoupling is important

To give your application a good home in the cloud, it is important to decouple your data. In other words, private and public clouds work well with application architectures that segregate processing and data into two separate components. Put simply, you build your application out of services, and you decouple data for the same reason. You can then store and process the decoupled data in any public or private cloud. For instance, many enterprises require their data to remain on local servers, but still want to benefit from commodity virtual machine instances in a public cloud.

Decoupling between services can be accomplished by adding a layer of technical abstraction, such as a message queue or a well-defined interface, between the content producer and the content consumer. Message queues decouple your processes: the sender and the receiver only need to agree on a common format for messages and content and to use the same message broker.
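A minimal producer-side sketch, assuming a JMS 2.0 broker and container-managed resources (the JNDI names are placeholders): the sender only knows the queue and the agreed message format, not the consumer that will eventually process the message.

    import javax.annotation.Resource;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class OrderNotifier {
        // Container-injected resources; the lookup names are illustrative.
        @Resource(lookup = "jms/ConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(lookup = "jms/OrderQueue")
        private Queue orderQueue;

        public void publishOrderCreated(String orderJson) {
            // Producer and consumer agree only on the message format (JSON
            // here); neither knows where, or even whether, the other is running.
            try (JMSContext context = connectionFactory.createContext()) {
                context.createProducer().send(orderQueue, orderJson);
            }
        }
    }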

Advantages of decoupling

  • A partial failure of the architecture will not bring the entire system down
  • You can add the messages to the queue and process them when the system has recovered
  • If the message fails to get delivered, it can be redelivered until the message is processed
  • Decoupling enables distribution of workload and offers more scalability

Cloud application security

Security is the topmost priority in the cloud, and it should be designed and built into the application's architecture. Each application is designed to fulfil a specific business purpose based on user needs, and application security practices differ from enterprise to enterprise.

At a high level, there are three aspects of security that are mandatory for cloud development:

  1. Encryption
  2. Identity and Access Management (IAM)
  3. Design for failure

Encryption

Focus on these three areas:

  • Encryption in flight: The need to secure data as it flows from one system to another. This is often where data is most vulnerable.
  • Encryption at rest: The need to secure data existing in a storage subsystem, raw storage, or in a database.
  • Encryption in use: The need to secure data that an application accesses and manipulates. Data services allow you to access data in use through layers of abstraction.

Whether or not the data is in use, the security requirements do not change. When implementing encryption in flight, you need to encrypt and decrypt data before placing it on the network for remote consumption or before the application can use it. This calls for more processing time and increases cost, so you need clarity on the requirements, the level of risk and potential loss, and the cost involved.

The most practical one is encryption at rest. You really don’t know where or how your data is physically stored in the cloud. Here, you ensure that nobody can steal your data when it resides on a cloud-based storage system. It is good to do a risk-benefit analysis prior to proceeding with encryption at rest, since you might have to deal with extra resources and latency associated with encrypting the data.
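As a hedged sketch of encryption at rest using only the standard javax.crypto API: the data is encrypted with AES-GCM before it is handed to the cloud storage system. In a real deployment the key would come from a key-management service rather than being generated in application memory, and the IV would be stored alongside the ciphertext.

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class AtRestEncryption {
        public static void main(String[] args) throws Exception {
            // Key generation shown inline for brevity; use a managed key
            // service (KMS/HSM) in practice.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(256);
            SecretKey key = keyGen.generateKey();

            byte[] iv = new byte[12];               // 96-bit nonce for GCM
            new SecureRandom().nextBytes(iv);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal("customer record".getBytes());

            // Persist iv + ciphertext; decryption re-initialises the cipher
            // in DECRYPT_MODE with the same key and iv.
            System.out.println("Encrypted " + ciphertext.length + " bytes");
        }
    }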

Identity and Access Management (IAM)

Every domain faces different types of challenges, and its security requirements differ accordingly. However, your security plan must cover these areas:

  • Identity management services
  • Access management services
  • Identity governance services
  • Authentication services
Identity management services

It refers to identity life-cycle management, centralized role management, access provisioning, workflow design, integration and implementation. With a streamlined identity management system, you can:

  • Define core identities for all resources/ actors
  • Regulate their access to enterprise systems
  • Have a centralized mechanism to store and manage identities
  • Leverage operational excellence
Access management services

It refers to single sign-on services, federation services, role-based access, and platform access. This works jointly with identity management services and makes use of the identity information to grant access upon authorization.

Identity governance services

It refers to governance services including compliance, role-based access control, and identity assurance.

Identity governance services are bound by a set of policies that cover:

  • how you manage identities, including everyone's role
  • how you link identities with compliance policies
  • how you execute governance controls
Authentication services

It refers to multifactor authentication, out-of-band authentication, and managed authentication services. Understanding the core components of the IAM system that you select is important here. Listing out the core IAM requirements and comparing the solution components offered by each IAM technology provider will help you pick the right solution.

Each IAM product is unique in terms of approach. Some are designed for cloud computing, while others fit traditional approaches. The challenge is to select the IAM technology that meets most of your demands.

Design for Failure

This is something that many enterprises overlook. You need to look at every possible point of failure and formulate a plan to handle it. Equally important is deciding what level of failure is acceptable and what is not (your RPO and RTO). For instance, if a data center goes down, it may be acceptable to provide "read-only" access to your servers for a few minutes, until you've promoted another database to "master".

Compare the cost of engineering for failure tolerance against the business value of that failure tolerance. This will prevent you from over-investing in activities that are not important for your business.

Cloud design consideration questionnaire

While analyzing your customer's project requirements, you need to take into account these four key elements in your cloud-based application design: scalability, availability, manageability and feasibility. Consider the consequences and trade-offs of any design decision, that is, what you gain versus what you lose, or what becomes more difficult to implement or achieve.

The following questionnaire tries to sum up the considerations under each of these four elements. Answering them will help you improve your cloud design strategy and planning.

Scalability

Scalability encompasses capacity, platform/ data and load.

Capacity:
  • Is there a need to scale individual application layers?
  • Does scaling of individual application layers affect the system’s availability?
  • How quickly should I scale individual services?
  • How to add additional capacity to the application or to its part(s)?
  • Is the application supposed to run at scale 24x7?
  • Is it possible to scale down the application outside peak hours (outside business hours/ weekends)?
Platform/ Data:
  • While working at scale, is it possible to operate within the limitations of the selected persistence services?
  • Staying within the limitations of the persistence platforms, how can I partition my data to enable scalability?
  • How to check if I am using platform resources effectively?
  • Can I collapse tiers to minimize resource usage and internal network traffic?
  • Does collapsing tiers affect scalability and future code maintainability of the application?
Load:
  • How to improve the design to overcome bottlenecks?
  • Which operations need to be handled asynchronously to help balance load at peak times?
  • How to use the platform features for rate-levelling and load balancing? (eg: Azure Queues, Service Bus, Azure Traffic Manager, Load Balancer)

Availability

Availability includes uptime guarantees, security, disaster recovery, performance, replication and failover.

Uptime guarantees:
  • What Service Level Agreements (SLAs) are the products expected to meet?
  • If I plan to use different cloud services, do they all conform to the SLA levels I need?
Security:
  • In the case of a hybrid application, how to secure the link between my corporate and cloud networks?
  • How to control access to the cloud provider’s admin portal?
  • What are the local laws and jurisdiction prevalent in the region where data is held?
  • Should I consider the countries where failover and metrics data are held?
  • How to handle the security patches and updates provided by both the vendor and the operating system?
  • How far do service decoupling and multi-tenancy affect my application's security?
  • How to restrict access to databases from other services?
  • How to manage regular password changes?
  • Is there a need for federated security? For instance, ADFS with Azure Active Directory
Performance:
  • Which areas of the system are highly contended and likely to cause performance issues?
  • Are there any traffic spikes that can result in performance issues?
  • How far is it possible to address performance issues using auto-scale and queue-centric design?
  • Can I make any parts of the system asynchronous to support seamless performance?
  • How to measure and identify the acceptable levels of performance? What happens if performance drops below that level?
Replication and failover:
  • Which part(s) of the application will be most impacted by a failure?
  • Which part(s) of the application gain the advantages of redundancy and failover features?
  • Are data replication services required? What are their benefits?
  • What are the restrictions specific to geopolitical areas? What is the area-wise availability of services?
  • How to prevent the replication of corrupt data?
  • Does a failure recovery exert unwanted pressure on the system? How to combat that situation?

Manageability

Manageability involves monitoring the health and performance of the live system and handling deployments.

Monitoring:
  • What are the steps taken to monitor the application?
  • Does the monitoring/metrics data comply with data protection policies?
  • Where is the monitoring/ metrics data stored physically?
  • Should we use off-the-shelf monitoring services or create our own?
  • How much data is produced by the monitoring plans?
  • How to access metrics and data logs? Is there a plan to effectively utilize the increasing data volumes?
  • Is there a need for both auditing and logging?
  • Is it okay to lose some of the metrics/ logging/ audit data?
  • Is there a need to alter the monitoring level at runtime?
  • Is automated exception reporting required?
Deployment:
  • How to automate deployment and check if the deployment was successful?
  • How to restore an unsuccessful deployment?
  • How many environments are needed (development, testing, production, etc.)?
  • Should each environment be available 24x7?
  • How to deploy to each of these environments?
  • Does each environment require separate data storage?
  • How to deploy/ redeploy/ patch without interrupting the live system?

Feasibility

Feasibility takes into account the ability to deliver and maintain the system, within the right budget and time.

Feasibility:
  • What skills and in-house experience are required to design and build cloud applications?
  • Do the budgetary constraints and timeframe of the design support building the application?
  • How to meet the SLAs? Is it possible to sensibly reduce the scope, SLA or resilience?
  • What are the acceptable trade-offs?

These are some of the questions that can assist your planning phase, but a full-fledged cloud design strategy for your business must encompass more thoughts, related to overcoming your present and future challenges.

Conclusion

Plenty of questions remain unanswered regarding security within the cloud and how both vendors and customers will manage their challenges and expectations. However, the cloud has undoubtedly generated a great amount of interest in the IT marketplace over the last decade. Following in the footsteps of mainframes, minicomputers, PCs, servers and so on, the cloud is creating history by radically changing the way enterprise IT operates. Cloud is an evolving as well as disruptive technology, and it requires careful planning during adoption, implementation and maintenance.

About the Authors

Five cloud experts at Zerone Consulting: Midhun M (Senior Software Engineer), Sanal P S (Senior Software Engineer), Neethu Krishnan (Software Engineer), Sruthi Unnikrishnan (Software Engineer), and Anuja Cherian (Software Engineer). The team has worked on several projects and has delivered cutting-edge applications in the cloud. A passionate group, these cloud professionals love to research and explore the latest technologies thriving in the marketplace.
