An application designed specifically for the platform on which it runs performs better, is more resilient, and is easier to manage. Developing for the cloud is no exception. The most remarkable capability the cloud offers is the ability to create, run, and maintain applications using its framework. Designing a cloud architecture requires you to consider the stages and components of application lifecycle management, integrated development environments, test and quality management, security testing, and numerous other crucial factors. This white paper presents some of the golden rules you need to know before joining the cloud bandwagon.
A customer wishes to deposit a check simply by taking a picture of it with a smartphone. The customer does not appreciate the idea of waiting in a long line at the bank.
Many companies are struggling to deliver results faster because they're stuck with on-premises solutions that are not responsive, connected, or agile.
Why welcome the cloud? Because it offers the mobility, intelligence, agility, personalization, and connected experience demanded by today's digitally native customers.
Speed changes the game. You'll have to work smarter and faster to deliver an unparalleled sales and service experience.
Customer expectations are changing as customers eagerly look for solutions and experiences that resonate with them personally. Hyperconnectivity across digital and social channels drives today's customers to demand a highly personalized experience across every channel they use.
Responsiveness, personalization, connectivity, cost: there are multiple reasons for businesses to move to the cloud. But you need to take several factors into account before building your applications in the cloud. This article highlights a few rules you must know when you architect your first cloud-based application.
Cloud refers to accessing software applications through a network connection, often by reaching remote data centers over a WAN or internet connection. Almost all IT resources can reside in the cloud: a software application or program, a service, or an entire infrastructure.
In simple words, a cloud application is a software program in which cloud-based components and local components work together. The application relies on remote servers to process its logic and is typically accessed through a web browser over an uninterrupted network connection.
Cloud application servers are typically located in remote data centers, usually operated by a third-party cloud infrastructure provider. Cloud-based applications enable several operations such as file storage and sharing, email distribution, data collection and management, CRMs, financial accounting, order and inventory management, and so on.
Cloud empowers businesses in many ways. Some of the enterprise benefits of cloud are listed below:
Ease of use
Cloud computing enables on-demand network access to a shared set of configurable computing resources such as networks, servers, storage, applications, and services. Automation, one of the biggest advantages of the cloud, reduces management effort and service provider interactions. Ultimately, you get rid of the burden of managing multiple layers such as physical data centers, operating systems, and hardware. This helps you reduce costs and lessen the time spent on mundane IT tasks. Your focus shifts to more important goals like driving business innovation and scaling your applications on demand.
There are several best practices or rules you need to follow while designing your cloud-based application.
Cloud computing architecture includes the components and subcomponents required for cloud computing. Ideally, these comprise a front-end platform (fat client, thin client, mobile device), a back-end platform (servers, storage), a cloud-based delivery model, and a network (internet, intranet, intercloud). Cloud platform services support immediate changes in application scale, so it is better to build your application to be as generic and stateless as possible. This keeps your application from being affected by dynamic scaling.
If an application is cloud-ready, it means that the application can be effectively deployed into a public or private cloud. That means, the application can take advantage of the capabilities of the PaaS layer on which it runs. Following these simple rules in your design will make your application cloud-ready, without needing to undergo a complete reimplementation. The same rules are helpful while migrating your existing applications to a dynamic cloud environment.
Instead of using the local file system as a store for temporary information, place the temporary information in a remote store such as a SQL or NoSQL database. Reading static information from a file system is fine.
NoSQL is an approach to database design that can accommodate a wide variety of data models, including key-value, document, column and graph formats. NoSQL is an alternative to traditional relational databases in which data is placed in tables and data schema is carefully designed before the database is built. NoSQL databases support working with large sets of distributed data.
In most applications, the hardest state to eliminate is session state, which limits the scalability of the application. Do not store state on the local file system, or permanent state in local memory. Unless an application can work seamlessly and rebalance work instantaneously while nodes are added or removed, it cannot operate smoothly in the cloud.
You can minimize the impact of session state by storing it in a centralized location external to the application server; eliminating it completely is difficult. As a best practice, push the state out to a highly available store that is external to your application server. For instance, you can use a distributed caching store such as IBM WebSphere Extreme Scale, Redis, or Memcached, or an external database (a traditional SQL database or a NoSQL database).
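The pattern can be sketched as follows. This is a minimal illustration in Python: the `SessionStore` class and its backing dict are hypothetical stand-ins for a real external store such as Redis or a database, and the serialization step shows why state must be in a form that can leave the application server.

```python
import json


class SessionStore:
    """Minimal session-store sketch. In production the dict below
    would be replaced by an external service (Redis, Memcached, or
    a database) reachable from every application node."""

    def __init__(self):
        self._data = {}  # stand-in for the remote, highly available store

    def save(self, session_id, state):
        # Serialize the state so it could travel over the network
        # to an external store instead of living in local memory.
        self._data[session_id] = json.dumps(state)

    def load(self, session_id):
        raw = self._data.get(session_id)
        return json.loads(raw) if raw is not None else None


store = SessionStore()
store.save("user-42", {"cart": ["sku-1", "sku-2"]})

# Because the state lives outside any one node, any copy of the
# application can rehydrate the session by its id.
print(store.load("user-42"))
```

With the state externalized this way, nodes can be added or removed under load without losing user sessions.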
In the cloud, multiple copies of the application usually serve the data, enabling both load balancing and failover. Even if one server fails, other servers take over the load. Design the application so that no data or functionality is lost when a single server fails, and so that applications hosted on other servers can take over the requests that were directed at the failed server.
When you write logs to the local file system, you increase the chance of losing information that is valuable for debugging problems. In a dynamic cloud environment, it's critical to have your logs on a service that outlives the nodes on which the logs were generated. For instance, PaaS vendors such as Heroku, Cloud Foundry, and PureApplication System provide log aggregators to which logs can be redirected remotely.
Most log frameworks offer different log levels that let you customize the amount of information logged. If your log information will be sent across the network, you may need to reduce the traffic overhead by raising the log threshold so the volume stays manageable.
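As a small sketch of both points, the Python standard library's `logging` module can write to stdout (where a platform log aggregator would typically capture the stream) with the threshold raised to limit volume. The logger name and messages here are illustrative:

```python
import logging
import sys

# Send logs to stdout so a PaaS log aggregator can capture the
# stream, rather than writing to the node's local file system.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)

logger = logging.getLogger("app")
logger.addHandler(handler)

# Raise the threshold: DEBUG chatter is dropped before it ever
# crosses the network, only WARNING and above are shipped.
logger.setLevel(logging.WARNING)

logger.debug("noisy detail - suppressed at WARNING level")
logger.warning("something worth shipping to the aggregator")
```

The same idea applies to any log framework: direct output to a stream the platform collects, and tune the level to the bandwidth you can afford.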
This rule covers several aspects. For instance, it is unwise to assume that the services your application calls live at particular host names or IP addresses. Although service-oriented architecture is common these days, it is still easy to find applications that embed the details of the service endpoints they call.
When those peer services are relocated or regenerated within the cloud environment and move to new host names and IP addresses, the calling application's code breaks. Because you will be constantly updating and changing properties, abstracting environment-specific dependencies into a set of property files may not be adequate.
Applications that are agnostic to clustering are more resilient in a cloud environment. Better options are to consult an external service registry to resolve service endpoints, or to delegate the entire routing function to a service bus or a load balancer with a virtual name.
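A minimal sketch of the idea, assuming endpoints are injected by the platform through environment variables (the `resolve_endpoint` helper, the `PAYMENTS_URL` variable, and the URL are hypothetical; a real deployment might instead query a service registry such as Consul or Eureka, or route through a load balancer's virtual name):

```python
import os


def resolve_endpoint(service_name, default=None):
    """Resolve a service endpoint at runtime instead of hard-coding
    host names or IP addresses in the application source."""
    env_key = f"{service_name.upper()}_URL"
    return os.environ.get(env_key, default)


# In a real environment the platform would inject this variable;
# we set it here only so the sketch is self-contained.
os.environ["PAYMENTS_URL"] = "https://payments.internal:8443"

print(resolve_endpoint("payments"))
print(resolve_endpoint("reports", default="https://fallback.internal"))
```

When the payments service is regenerated on a new host, only the injected value changes; the calling code does not.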
Many Java developers still create their own threads and manage their own thread pools. While monitoring your application, you will see the advantage of avoiding low-level infrastructure APIs: when you create your own thread pools, the cloud's monitoring tools may not help you discover thread bottlenecks. As a best practice, limit the range of APIs used in the application code, and shift the responsibility for infrastructure services to the provider so that the infrastructure layers can be updated without affecting the application.
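To illustrate the delegation, here is a Python sketch using the standard library's managed executor rather than hand-rolled threads (on Java this would correspond to a container-managed `ExecutorService`; the `fetch` function and its inputs are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor


def fetch(item):
    """Placeholder for real work, e.g. calling a downstream service."""
    return item * 2


# Delegate pooling to a managed executor instead of spawning raw
# threads the platform cannot observe or tune. The pool handles
# scheduling, reuse, and clean shutdown for us.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, [1, 2, 3]))

print(results)  # [2, 4, 6]
```

The application code states only *what* work to run; *how* the threads are created, sized, and monitored stays in the infrastructure layer.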
Changing the infrastructure becomes more challenging when you start making assumptions about the infrastructure on which your application runs. Think about why your application code is calling an infrastructure service or API. Your application must focus on solving the business problem for which it is created, and should not deal with manipulating the infrastructure on which it runs. It’s better to leave the PaaS solutions in the PaaS layer and keep them out of your application code.
Resiliency is a must-have feature in the cloud, especially if you have to add or remove nodes under a load. You don’t need to build your own database connection model if the platform can provide it. By delegating the configuration inventory to the platform, applications based on HTTP, SSL, and standard database, queuing, and web service connections are going to be more resilient in the long run.
You must take steps to modernize and standardize any older or non-standard protocols. Moving to an HTTP-based infrastructure based on standards such as REST or even SOAP or WS simplifies porting of your system to a new environment. The asynchronous protocols such as IBM MQ or MQTT are still in vogue and are effective for application programming. So, take the minimalist approach and select the right tool for the task.
Applications that use standards-based services and APIs are more portable than those relying on specific OS features. There is a tendency to use OS-specific features even when a high-level, OS-neutral alternative is available. What works in Linux or a UNIX derivative may not function well on Microsoft Windows.
You can solve this to an extent by using compatibility libraries that make one OS "look like" another. For example, Cygwin is a compatibility library that offers a set of Linux tools in a Windows environment, and Mono is a compatibility library that provides .NET capabilities on Linux. The best practice, however, is to avoid OS-specific dependencies as much as possible and rely on your service provider or your middleware infrastructure provider.
Cloud environments are intended to be created and destroyed far more frequently than their traditional counterparts, so your application needs to be installed frequently and on demand. The installation process must be scripted, with configuration data externalized from the scripts. The basic necessity is to capture your application installation as a set of operating-system-level scripts. Take advantage of the built-in scripting mechanism provided by your middleware platform, if available. If the application installation is small and portable, you can also take advantage of automation tools such as Chef, Puppet, or the patterns in PureApplication System.
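A tiny sketch of the principle, scripted installation with configuration externalized from the script itself. The `install` function, the config keys (`install_dir`, `port`), and the generated `app.properties` file are all illustrative, and the step is written to be idempotent so it can run on every fresh node:

```python
import json
import pathlib
import tempfile


def install(config_path):
    """Install step driven entirely by an external config file;
    nothing environment-specific is hard-coded in the script."""
    config = json.loads(pathlib.Path(config_path).read_text())
    target = pathlib.Path(config["install_dir"])
    target.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
    props = target / "app.properties"
    props.write_text(f"port={config['port']}\n")
    return props


# Simulate a platform-provided config file in a temp directory.
with tempfile.TemporaryDirectory() as tmp:
    cfg = pathlib.Path(tmp) / "config.json"
    cfg.write_text(json.dumps({"install_dir": f"{tmp}/app", "port": 8080}))
    print(install(cfg).read_text())  # port=8080
```

Because the environment-specific values live in the config file, the same script can provision a new node in any environment the platform creates.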
Answering these questions is essential to addressing the installation challenges:
One of the most important factors to consider while developing applications for the cloud is the legal requirements, along with the requirements of the application owner, which bind the developer by way of a contract. More and more countries have begun enforcing data security standards; the European Union's General Data Protection Regulation (GDPR) is a recent example. Developers need to consider the applicable laws while the software is being developed. Does the business fall under the purview of the GDPR? If not, do other legislations apply to the business? Has the client requested any specific security requirements?
You need to ensure that data storage and data flows do not violate any of these. There may be a requirement that data must not traverse outside of the EU. The server and persistent storage may indeed be in the EU, but if a third-party service used to process the data (say, an email service that sends notifications, or a log analysis tool that collects logs and analyzes performance statistics) is located outside the EU, the result is a contract or legislation violation that may create legal obligations for the developer.
In order to give your application a good home in the cloud, it's important to decouple your data. In other words, private and public clouds work well with application architectures that segregate processing and data into two separate components. Put simply, just as you build your application out of services, you decouple the data for the same reason: the decoupled data can then be stored and processed in any public or private cloud. For instance, many enterprises require their data to remain on local servers, yet still want to benefit from commodity virtual machine instances in a public cloud.
Decoupling between services can be accomplished by adding a layer of technical abstraction, such as a message queue or a well-written interface, between the content producer and the content consumer. Message queues decouple your processes: the sender and the receiver need only agree on a common format for messages and content, and use the same message broker.
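The producer/consumer decoupling can be sketched with the standard library's thread-safe queue standing in for a real broker such as IBM MQ or RabbitMQ. The message format (a dict with `type` and `order_id`) and the `None` sentinel are conventions invented for this example:

```python
import queue
import threading

# The queue is the only thing producer and consumer share: neither
# knows the other's host, identity, or even whether it is running yet.
broker = queue.Queue()

received = []


def consumer():
    """Drain messages until the agreed-upon sentinel arrives."""
    while True:
        msg = broker.get()
        if msg is None:  # sentinel: producer has finished
            break
        received.append(msg)


def producer():
    # Producer and consumer agree only on the message format.
    broker.put({"type": "order.created", "order_id": 7})
    broker.put(None)


worker = threading.Thread(target=consumer)
worker.start()
producer()
worker.join()

print(received)
```

With a real broker the two sides would also survive each other's restarts, since the broker buffers messages while a consumer is down.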
Security is the topmost priority in cloud. It should be designed and built into the application’s architecture. Each application is designed to fulfil some specific business purpose based on the user needs. Application security practices differ from enterprise to enterprise.
On a higher level, we deal with three aspects of security that are mandatory for cloud development:
Focus on these three areas:
The security requirements do not change whether or not the data is actively being used. When implementing encryption in flight, you encrypt data before placing it on the network for remote consumption and decrypt it before the application can use it. This adds processing time and cost, so you need clarity on the requirements, the level of risk and potential loss, and the costs incurred.
The most practical one is encryption at rest. You really don't know where or how your data is physically stored in the cloud; encryption at rest ensures that nobody can steal your data while it resides on a cloud-based storage system. Do a risk-benefit analysis before proceeding, since you may have to deal with the extra resources and latency associated with encrypting the data.
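The encrypt-before-store, decrypt-after-load flow can be illustrated as below. Note the cipher here is a deliberately trivial one-time-pad XOR, chosen only so the sketch is self-contained; a real system should use a vetted library (for example, Fernet from the `cryptography` package) with keys held in a key management service, never alongside the stored data:

```python
import secrets


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad: XOR each byte with the key. XOR is its own
    inverse, so the same function encrypts and decrypts. For
    illustration only; not a substitute for real cryptography."""
    return bytes(d ^ k for d, k in zip(data, key))


record = b"account=1234"
# Key is generated and kept outside the cloud storage system.
key = secrets.token_bytes(len(record))

stored = xor_cipher(record, key)   # what the cloud provider's disk sees
assert stored != record            # ciphertext reveals nothing useful

recovered = xor_cipher(stored, key)  # decrypt after retrieval
print(recovered)  # b'account=1234'
```

The cost trade-off mentioned above shows up exactly at these two calls: every read and write pays the encryption overhead, which is why the risk-benefit analysis matters.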
Every domain faces different challenges, and so its security requirements differ. However, your security plan must cover these areas:
Identity management refers to identity life-cycle management, centralized role management, access provisioning, workflow design, integration, and implementation. With a streamlined identity management system, you can:
Access management refers to single sign-on services, federation services, role-based access, and platform access. It works jointly with identity management services, using the identity information to grant access upon authorization.
Identity governance refers to governance services including compliance, role-based access control, and identity assurance.
Identity governance services are bound on a set of policies that include:
Authentication services refer to multifactor authentication, out-of-band authentication, and managed authentication services. It is important to understand the core components of the IAM system you select. Listing your core IAM requirements and comparing the solution components offered by each IAM technology provider will help you pick the right solution.
Each IAM product takes a unique approach. Some are designed for cloud computing, while others fit traditional environments. The challenge is to select the IAM technology that meets most of your demands.
This is something many enterprises overlook. You need to look at every possible point of failure and formulate a plan to face the adversity. Equally important is deciding what level of failure is acceptable and what is not, expressed as recovery point and recovery time objectives (RPO and RTO). For instance, if a data center goes down, it may be acceptable to provide read-only access to your servers for a few minutes, until you've promoted another database to master.
Compare the cost of engineering for failure tolerance against the business value of that tolerance. That will prevent you from over-investing in activities that are not important for your business.
While analyzing your customer's project requirements, you need to take into account four key elements in your cloud-based application design: scalability, availability, manageability, and feasibility. Consider the consequences and trade-offs of every design decision: what you gain versus what you lose, or what becomes more difficult to implement or achieve.
The following questionnaire tries to sum up the considerations under each of these four elements. Answering them will help you improve your cloud design strategy and planning.
Scalability encompasses capacity, platform/data, and load.
Availability includes uptime guarantees, security, disaster recovery, performance, replication and failover.
Manageability involves monitoring the health and performance of the live system and handling deployments.
Feasibility takes into account the ability to deliver and maintain the system, within the right budget and time.
These questions can assist your planning phase, but a full-fledged cloud design strategy for your business must encompass further considerations related to overcoming your present and future challenges.
Plenty of questions still remain unanswered regarding security within cloud and how both vendors and customers will manage their challenges and expectations. However, cloud has undoubtedly generated a great amount of interest in the IT marketplace, in the last decade. Following the footsteps of mainframes, minicomputers, PCs, servers and so on, cloud is creating history by radically changing the way enterprise IT operates. Cloud is an evolving as well as disruptive technology, and requires careful planning during adoption, implementation and maintenance.
Five cloud experts at Zerone Consulting: Midhun M (Senior Software Engineer), Sanal P S (Senior Software Engineer), Neethu Krishnan (Software Engineer), Sruthi Unnikrishnan (Software Engineer), and Anuja Cherian (Software Engineer). The team has worked on several projects and has delivered cutting-edge applications on the cloud. A passionate group, these cloud professionals love to research and explore the latest technologies thriving in the marketplace.
Interested in our services? Get in touch with us for customized solutions!
© 2005-2018 Zerone Consulting Private Limited. All Rights Reserved