Where Cloud-Native Comes From and Where It's Going
ZenTao | 2022-02-18
Cloud-Native has been a popular concept in recent years, but few people truly understand it. That is because Cloud-Native sits at the top layer of the cloud computing technology stack, and cloud computing is inherently complex. So before we talk about Cloud-Native, let's look at where it comes from and where it is going.
Monolithic Service Era
Once upon a time, software, hardware, and network technology were immature, the business complexity that software had to handle was low, and software systems were generally small. The computing and storage capacity of a single server was enough to keep such a system running. We call this software architecture a Monolithic application.
Source: Henrique
The advantages of monolithic applications are a clear architecture, simple deployment and maintenance, and easy testing. As business complexity gradually increased, so did the demands on computing and storage capacity. A monolithic application can only address performance problems by continuously optimizing its algorithms and upgrading the hardware of a single server, so the operating cost of the whole system kept climbing until further capacity expansion was no longer feasible.
Cluster Service Era
After microcomputers became popular in the 1980s, the cost of a single computer dropped rapidly, and an affordable solution to the scaling problem of monolithic applications finally appeared:
Split the monolithic application into modules, deploy them separately, and use multiple hosts to share the load.
Different business modules have different computing and storage requirements; for example, a "data management module" consumes more storage resources, while a "data analysis module" consumes more computing resources. Each module can therefore be deployed on one or more hosts whose specifications match its needs, and these hosts are managed together as a server cluster. We call this software architecture a Cluster application.
Source: Cluster
The Cluster application solves the scaling problem of Monolithic applications. Combined with technologies such as load balancing and reverse proxies, it can provide highly concurrent access and processing capacity for ultra-large-scale, complex services.
The advantages of Cluster applications are easy scaling, flexible deployment, high system reliability, and high service capacity.
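To make the load-balancing idea concrete, here is a minimal sketch in Python (standard library only) of a round-robin reverse proxy that forwards each incoming request to the next backend host in turn. The backend addresses are made up for illustration; a production cluster would normally use a dedicated proxy such as Nginx or HAProxy.

```python
# Minimal round-robin load balancer sketch (backend addresses are hypothetical).
# Each incoming request is forwarded to the next backend, so several hosts
# share the traffic that a single monolithic server used to carry.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from itertools import cycle
from urllib.request import urlopen

BACKENDS = cycle([
    "http://10.0.0.11:8080",   # e.g. host running the data management module
    "http://10.0.0.12:8080",   # e.g. host running the data analysis module
])

class RoundRobinProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                  # pick the next backend in turn
        with urlopen(backend + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Clients talk to this one address; the backends stay hidden behind it.
    ThreadingHTTPServer(("0.0.0.0", 8000), RoundRobinProxy).serve_forever()
```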
Since 2010, the world has entered the era of the mobile Internet, and the popularity of smartphones has led to explosive growth in the number of Internet users. The rapid rise of online businesses such as e-commerce, social networking, mobile games, and short video has been accompanied by the rapid expansion of Internet service providers' clusters to hundreds or even thousands of servers.
Due to the huge number of servers in large-scale applications, problems arise:
- Uncontrollable factors such as hardware aging and damage, network failures, and power outages occur frequently.
- Scaling a business up or down often requires adjusting how hosts are deployed.
- Recovery from failures is slow, and services cannot be quickly migrated to other hosts.
- Because business deployment and migration are slow, server capacity is left unsaturated and cluster resources sit idle and underutilized.
As a result, operation and maintenance became more and more difficult.
Cloud Computing Service Era
Cloud computing is built on virtualization technology. By virtualizing hardware, operating systems, and networks, it lets business modules run in a virtual environment that is more controllable and easier to operate and maintain.
When a business runs on the cloud, the resource requirements of different modules can be mapped directly onto different cloud services. For example, if the "data management module" consumes a lot of storage, it can be backed by S3 object storage and a cloud database; if the "data analysis module" is CPU-intensive, more CPUs can be allocated to the virtual system that runs it.
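As a small, hedged illustration of mapping a module's needs onto a cloud service, the sketch below uses the AWS SDK for Python (boto3) to store and fetch a data file in S3; the bucket name, object key, and file name are placeholders.

```python
# Hypothetical example: the "data management module" offloads its storage
# needs to S3 instead of local disks. Bucket, key, and file names are placeholders.
import boto3

s3 = boto3.client("s3")  # credentials are taken from the environment or ~/.aws

# Upload a local data file to the bucket that backs the module.
s3.upload_file("daily_report.csv", "my-data-management-bucket", "reports/daily_report.csv")

# Any other host in the cluster can later read the same object.
obj = s3.get_object(Bucket="my-data-management-bucket", Key="reports/daily_report.csv")
print(obj["Body"].read()[:100])
```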
By running the business in containers and using container orchestration tools to monitor, scale out, and scale in those containers, operation and maintenance personnel are decoupled from the host hardware. They usually only need to manage virtual resources, while the hardware itself is maintained by professional operators and the data center.
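For instance, scaling a containerized module no longer means touching any physical host. Assuming a Kubernetes cluster and a deployment named "data-analysis" (both hypothetical), a few lines with the official Kubernetes Python client are enough:

```python
# Hypothetical sketch: scale the "data-analysis" deployment to 5 replicas
# using the official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()              # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="data-analysis",              # placeholder deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},    # the orchestrator schedules the new pods
)
```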
We call applications built on the virtual resources of cloud computing Cloud-Native applications.
Source: DataArt
Pros of Cloud-Native applications:
- They can be scaled flexibly according to business needs, reducing the pressure of equipment costs in a company's early stage.
- Operational reliability is extremely high, since the hardware is maintained by professional operators.
- With Kubernetes established as the de facto standard for container orchestration, business containers are managed with orchestration tools, keeping operation and maintenance costs low.
Conclusion
Pivotal (acquired by VMware in 2019) is the originator of the Cloud-Native application concept and a pioneer and pathfinder of Cloud-Native application architecture, having launched Pivotal Cloud Foundry (a cloud-native application platform) and Spring (an open-source Java development framework). Matt Stine, Technical Product Manager at Pivotal, wrote "Migrating to Cloud-Native Application Architectures", which explains in detail the definition of and the migration guidelines for cloud-native architecture.
Currently, cloud-native is no longer just a concept built on cloud computing, but an integration of a series of advanced concepts and methodologies that guide software companies through the transition from traditional to cloud-native architectures.
Cloud-Native applications are built on cloud platforms, which are inherently distributed. Building on this distributed nature, cloud-native proposes the architectural concept of microservices, and Kubernetes is used for unified orchestration and operations management to make microservices and containerized services easy to deploy and manage. Microservices and containerization form the basic model of cloud-native.
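As a tiny, hypothetical illustration of this model, the sketch below is a self-contained Python microservice exposing a single health-check endpoint. Packaged into a container image, it is exactly the kind of unit that Kubernetes schedules, replicates, and restarts independently of the rest of the system.

```python
# A minimal, hypothetical microservice: one HTTP endpoint, no external state.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":               # liveness/readiness probe target
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```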
Implementing and continuously delivering cloud-native applications requires closer collaboration between development and operations staff. The DevOps concept guides developers and operations staff toward efficient, harmonious collaboration, and together with Agile development methodologies it helps teams achieve rapid, continuous delivery.
In the future, as cloud computing continues to reshape enterprise IT and its infrastructure matures further, more and more service-oriented applications will move onto cloud computing platforms. As the protocol standards for cloud platforms gradually take shape, Cloud-Native applications will deploy and run seamlessly whether on the public clouds of vendors such as AWS, Azure, and Alibaba Cloud, or on private cloud platforms built by enterprises. Cloud-Native applications will inherit the elastic scaling, high availability, fault tolerance, and self-recovery of cloud computing, and provide more stable and reliable services for enterprises.
Click here to learn more about ZenTao Cloud.