
While the term “Cloud” for virtualized and distributed computing rose to prominence in the Telecom and Networking industry during the 1990s, the meaning of cloud computing has broadly evolved into a set of guidelines and promises for the consumers of such facilities:
- On-demand availability of computing resources without direct management by the user
- The responsibility of running hardware and software infrastructure resides with the cloud providers, who invariably operate at scale, often termed hyper-scale
- Leverage the economies of hyper-scaling by sharing a seemingly unlimited pool of resources in an elastic manner, thereby aiming to bring down total cost of ownership
- Benefit from the tech innovations as they occur and are implemented by the cloud providers
- Participate in a shared security and availability model with cloud providers in operating world-class services
Deployment models
Over the last two decades, cloud computing has emerged as a preferred operating model. Organizations design their usage of cloud resources by adopting one of the following models:
Private Cloud – This cloud infrastructure is operated for the sole use of a single organization. It is managed internally or outsourced to third parties, and may be hosted on-prem or in a third-party facility.
Establishing a private cloud requires significant effort in virtualizing existing and new resources. Owning a data-centre is highly capital intensive and concentrates risk. This model demands physical space with a controlled operating environment, periodic refreshes of the infrastructure and management-intensive operating procedures.
Public Cloud – It is characterized by a set of services made available by a third-party provider to multiple customers in a shared fashion. This type of cloud allows for a level of resource scalability that is far beyond the needs of a single consumer or an organization. Operational expenses in the pay-as-you-go model replace capital expenses.
Public cloud is typically designed with built-in redundancies to prevent data loss and to enable smooth, fast disaster recovery.
Hybrid Cloud – A hybrid cloud implementation spans public and private clouds, connecting managed or dedicated services to components hosted in the public cloud.
A hybrid implementation allows an organization to extend either the capacity or the capability of a cloud service by aggregating, integrating or customizing it with another cloud service, driven by data privacy and co-location requirements, temporary capacity expansion, or the creation of replicas and partitions.
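In practice, a hybrid deployment often comes down to a placement rule: which records may leave the private environment. The Python sketch below is a minimal illustration of such a rule; the endpoint URLs and the PII check are hypothetical, not a reference to any specific provider's API.

```python
# Minimal hybrid-cloud placement sketch: records flagged as sensitive stay
# in the private cloud, everything else may go to the public cloud.
# The endpoint URLs and the PII rule below are hypothetical.

PRIVATE_ENDPOINT = "https://storage.private.example.internal"  # on-prem / private cloud
PUBLIC_ENDPOINT = "https://storage.public.example.com"         # public cloud

def contains_pii(record: dict) -> bool:
    """Toy classifier: treat records carrying these fields as sensitive."""
    return any(key in record for key in ("ssn", "dob", "email"))

def placement(record: dict) -> str:
    """Decide where a record should be stored."""
    return PRIVATE_ENDPOINT if contains_pii(record) else PUBLIC_ENDPOINT

if __name__ == "__main__":
    records = [
        {"id": 1, "ssn": "xxx-xx-1234"},    # sensitive -> private cloud
        {"id": 2, "metric": "page_views"},  # non-sensitive -> public cloud
    ]
    for record in records:
        print(record["id"], "->", placement(record))
```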
Multi Cloud – This refers to the use of multiple cloud services in a single heterogeneous architecture to reduce dependency on a single vendor.
The multi cloud scenario generally appeals to end users who are concerned about cloud vendor lock-in and who design their services and architecture so that applications can move easily to a different cloud provider, or even be repatriated to on-prem deployments.
In practice, multi cloud is largely limited to independent software vendors, who have a vested interest in running their software in as many environments as possible.
Inter Cloud – Less commonly used, but increasingly of interest to consumers seeking more advantageous pricing models, specific tools not available from a particular cloud service provider, risk mitigation through the use of multiple providers, avoidance of vendor lock-in and, at times, data sovereignty through diversified data locations.
This model generally introduces more complexity into the deployment, including access and identity management, compatibility friction between the auxiliary services of each provider, changes to data flow patterns and, at times, the added costs these bring.
Cloud deployment challenges
Although cloud is becoming mainstream and finding wide-spread acceptance across industries, that does not imply that all workloads will move to the cloud. Technical incompatibilities, the bulk of business logic that still serves well within mainframe applications, government regulations and, at times, changing economics encourage some workloads to remain in an enterprise’s data-centres, driving enterprises toward a hybrid cloud future.
Some of the challenges that enterprises face in the current models of mixing public and private clouds can be summarized as:
- Co-location needs of applications and components with respect to data flow, in both volume and direction. Volume impacts the expected latency and performance, while direction impacts the cost
- Costs involved in going across public network boundaries, and between regions of a public cloud, are high (a rough cost sketch follows this list).
- Dedicated high-speed links between the data-centre and the public cloud add to the cost overhead.
- The economics available to a customer operating a hybrid environment are hampered by the reduced scale and elasticity of the services.
- The need for a deep understanding of complex implementation requirements and testing demands.
- Keeping pace with innovation is difficult, as services in the public cloud evolve much faster than the consumer can track.
- The use of different control planes to administer enterprise-owned services and the public cloud services breaks the consistency of operations.
- The public cloud is centrally located, whereas local operations inherently call for a distributed cloud model, which may not be offered at many customer locations.
- Geopolitical issues are leading to increasing national concerns about connections to the global internet, including censorship, security, privacy and data sovereignty.
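To make the transfer-cost point concrete, the sketch below estimates a monthly bill from two data flows. Both per-GB rates are illustrative placeholders, not any provider's actual pricing, which varies by provider, region and tier.

```python
# Back-of-the-envelope egress cost estimate for a hybrid deployment.
# Both per-GB rates below are illustrative placeholders, not real pricing.

EGRESS_RATE_PER_GB = 0.09        # hypothetical: data leaving the public cloud
INTER_REGION_RATE_PER_GB = 0.02  # hypothetical: data moving between regions

def monthly_cost(egress_gb: float, inter_region_gb: float) -> float:
    """Estimate the monthly data-transfer bill in dollars."""
    return egress_gb * EGRESS_RATE_PER_GB + inter_region_gb * INTER_REGION_RATE_PER_GB

# A workload that pulls 50 TB back to the data-centre and replicates
# 20 TB across regions each month:
print(f"${monthly_cost(50_000, 20_000):,.2f} per month")  # -> $4,900.00 per month
```

Direction matters here: inbound transfer is typically free or cheap, while outbound transfer carries the bulk of the bill, which is why the direction of data flow in the first bullet above affects cost.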
Role of emerging technologies
In the last couple of years, the world has seen unprecedented growth in edge computing. The focus has shifted to building ever more powerful processing roles for small devices, which has in turn translated into a complementary compute and storage service as one of the cloud deployment scenarios.
Here we must take into account myriad edge devices and remote nodes, many with their own computing power and the ability to interact with the physical world around them. The remote edge nodes take advantage of the bigger core to further store and analyse the data and activities collected and transmitted by the multitude of connected devices as a whole.
Until recently, edge computing was considered unnecessary. However, implementation realities requiring fast communication, the economics of backhauling massive amounts of collated data from swarms of edge nodes at remote gateways, and cloud data transfer expenses have led providers to conclude that both the centralized cloud and the distributed edge are necessary.
By combining the data-gathering potential of edge computing with the storage capacity and processing power of the cloud, companies can keep their IoT devices running fast and efficiently without sacrificing the valuable analytical data that could help them drive innovation.
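A common form of this pattern is edge-side pre-aggregation: the node summarizes a window of raw readings locally and forwards only the compact aggregate, so most raw data never crosses the backhaul link. The Python sketch below illustrates the idea; the window size, the simulated sensor and the upload stub are all illustrative assumptions.

```python
import random
import statistics

WINDOW = 60  # hypothetical: summarize every 60 readings

def read_sensor() -> float:
    """Stand-in for a real sensor read (simulated temperature)."""
    return 20.0 + random.gauss(0, 1.5)

def upload_to_cloud(summary: dict) -> None:
    """Stand-in for the real transmit step (e.g. MQTT or HTTPS)."""
    print("uploading:", summary)

def run_window() -> None:
    readings = [read_sensor() for _ in range(WINDOW)]
    # One small summary replaces WINDOW raw data points on the wire.
    upload_to_cloud({
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "min": round(min(readings), 2),
        "max": round(max(readings), 2),
    })

if __name__ == "__main__":
    run_window()
```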
Another use case is the proliferation of content providers and the streaming business. It can be difficult for network infrastructure to keep up with growing consumer demand across geographies for higher resolution video and audio, and the consumption pattern has become more individual-centric. Caching static data at the edges allows content providers to place it closer to end users for fast access.
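A simplified way to picture this caching is a point of presence that serves repeat requests locally and only contacts the distant origin on a miss or once an entry goes stale. In the Python sketch below, the time-to-live value and the origin fetch stub are illustrative assumptions.

```python
import time

TTL_SECONDS = 300  # hypothetical freshness window for static content
_cache: dict[str, tuple[float, bytes]] = {}

def fetch_from_origin(path: str) -> bytes:
    """Stand-in for a request back to the distant origin server."""
    print("origin fetch:", path)
    return f"<content of {path}>".encode()

def get(path: str) -> bytes:
    """Serve from the edge cache, refreshing from the origin when stale."""
    now = time.monotonic()
    entry = _cache.get(path)
    if entry is not None and now - entry[0] < TTL_SECONDS:
        return entry[1]             # cache hit: no trip to the origin
    body = fetch_from_origin(path)  # miss or stale: go back to the origin
    _cache[path] = (now, body)
    return body

get("/video/segment-001.ts")  # first request travels to the origin
get("/video/segment-001.ts")  # repeat request is served at the edge
```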
Enterprises are using edge topologies to cut WAN costs while improving resiliency and user experience, and they increasingly rely on the edge model to create value from data and observations.
Cloud in the future
The considerations driving enterprises’ use of the cloud for their computing and storage needs are:
- Data volume considerations and bandwidth utilization
- Need for local interactivity and effect of latency on an application
- Need for limited autonomy or disconnected operation
- Privacy and security concerns, sovereignty of data
- Cost of high-powered edge devices
Historically, location has not been relevant to cloud computing, although location-related issues matter in many situations. Taking an objective look at each of the considerations above, the applications of the future are veering towards the edge, where the actual interactions take place, making location an important consideration.
This has given rise to the concept of operating on a Distributed Cloud: by definition, the distribution of public cloud services to different physical locations, while the operation, governance, updates and evolution of the services remain the responsibility of the originating public cloud provider. The distributed concept is being extended beyond regions into smaller granularities at the edges of consumption, with offerings such as Azure Stack, Oracle Cloud at Customer, AWS Outposts and IBM’s Bluemix Local.
Distributed cloud computing may or may not be delivered in a hybrid cloud mode. A central tenet of the concept is that the cloud provider is fully responsible for all aspects of delivery. This restores cloud value propositions that were broken when customers had to take responsibility for part of the solution delivery, as is normally the case in current hybrid cloud scenarios. Notably, the cloud provider does not need to own the hardware on which the distributed cloud sub-station is installed, but it must take full responsibility for how that hardware is maintained and administered.
Distributed cloud supports tethered and untethered operation of like-for-like cloud services, distributed out from the public cloud to specific and varied physical locations. This enables low-latency compute by placing the compute operations for a cloud service closer to those who need its capabilities, which can deliver major improvements in performance and reduce the risk of global network-related outages.
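For a rough sense of why proximity matters, the sketch below estimates a best-case network round trip from distance alone, assuming signals in fibre travel at roughly 200,000 km/s (about two-thirds the speed of light); real round trips add routing and processing overhead on top.

```python
# Back-of-the-envelope round-trip-time estimate from distance alone.
FIBRE_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed in km per millisecond

def best_case_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fibre; real RTT is higher."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

print(f"nearby edge site (50 km):       {best_case_rtt_ms(50):.2f} ms")    # ~0.50 ms
print(f"distant cloud region (2000 km): {best_case_rtt_ms(2000):.2f} ms")  # ~20.00 ms
```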
Distributed hybrid cloud approaches will greatly simplify the management and operations of the empowered edge. The ability to process complex data streams locally means systems need not transmit data to remote cloud vendors, possibly into other regions. As for data sovereignty pre-conditions, distributed cloud gives enterprises the capabilities of a public cloud delivered in a physical location that meets those requirements.
How can Enquero help
Enquero has nurtured the capability to help clients transform into the digital realm in multiple ways that lead into the cloud. As more and more business use cases rely on analytics, collecting and processing huge amounts of data on a regular basis to support business processes, and expanding into multiple global locations, the public cloud must be leveraged and adopted to solve common business problems in the near future. As a development partner, Enquero is at the forefront, equipped to seize the opportunity of leading its clients into this new endeavour.