Cloud-Native Network Function
A Cloud-Native Network Function (CNF) is a software implementation of a function, or application, traditionally performed on a physical device, but which runs inside Linux containers (typically orchestrated by Kubernetes). What distinguishes CNFs from VNFs (Virtualized Network Functions), one of the components of Network Function Virtualization, is the approach to their orchestration. In ETSI NFV standards, Cloud-Native Network Functions are a particular type of Virtualized Network Function and are orchestrated as VNFs, i.e. using the ETSI NFV MANO architecture and technology-agnostic descriptors (e.g. TOSCA, YANG). In that case, the upper layers of the ETSI NFV MANO architecture (i.e. the NFVO and VNFM) cooperate with a Container Infrastructure Service Management (CISM) function that is typically implemented using cloud-native orchestration solutions (e.g. Kubernetes). The characteristics of Cloud-Native Network Functions are: * Containerized microservices that communicate ...
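As an illustration of how a CNF might be handed to a cloud-native orchestrator, the sketch below uses the Kubernetes Python client to declare a Deployment for a hypothetical containerized function. It assumes the official "kubernetes" client package and a reachable cluster; the image reference, names, labels, and replica count are placeholders rather than anything defined by the ETSI or CNCF specifications.

```python
# Minimal sketch: registering a containerized network function with Kubernetes.
# Assumes the official "kubernetes" Python client and a cluster configured in
# ~/.kube/config; the image and resource names are hypothetical.
from kubernetes import client, config


def deploy_cnf(namespace: str = "default") -> None:
    config.load_kube_config()  # read cluster credentials from the local kubeconfig
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="packet-filter",                       # hypothetical CNF microservice
        image="example.org/cnf/packet-filter:1.0",  # placeholder image reference
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="packet-filter"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the orchestrator keeps three instances running
            selector=client.V1LabelSelector(match_labels={"app": "packet-filter"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "packet-filter"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    deploy_cnf()
```

The point is not the specific manifest but that the lifecycle of the function is delegated to the container orchestrator rather than to an NFV-specific manager.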


Linux Containers
OS-level virtualization is an operating system (OS) paradigm in which the kernel allows the existence of multiple isolated user space instances, called "containers" (LXC, Solaris containers, Docker, Podman), "zones" (Solaris containers), "virtual private servers" (OpenVZ), "partitions", "virtual environments" (VEs), "virtual kernels" (DragonFly BSD), or "jails" (FreeBSD jail or chroot jail). Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside a container can only see the container's contents and the devices assigned to the container. On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder ...
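Since the excerpt describes containers as an advanced form of the chroot mechanism, here is a minimal sketch of that underlying idea using only Python's standard library. It must run as root, the target directory is a placeholder, and a real container runtime adds namespaces, cgroups, and an isolated filesystem image on top of this.

```python
# Minimal sketch of the chroot mechanism that OS-level virtualization builds on.
# Requires root privileges; "/srv/minimal-root" is a placeholder directory that
# must already contain whatever the confined program needs (binaries, libraries).
import os


def enter_jail(new_root: str = "/srv/minimal-root") -> None:
    os.chroot(new_root)  # the process now sees new_root as "/"
    os.chdir("/")        # move the working directory inside the new root

    # From this point on, the process can no longer address files outside
    # new_root by path; container runtimes combine this kind of filesystem
    # isolation with namespaces and cgroups for fuller isolation.


if __name__ == "__main__":
    enter_jail()
    print("Visible root now contains:", os.listdir("/"))
```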


Kubernetes
Kubernetes (commonly stylized as K8s) is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. Kubernetes works with container runtimes such as containerd and CRI-O. Originally, it interfaced exclusively with the Docker runtime through a "Dockershim"; however, between November 2020 and April 2022 Kubernetes deprecated the shim in favor of interfacing directly with the container runtime through containerd, or replacing Docker with a runtime compliant with the Container Runtime Interface (CRI). With the release of v1.24 in May 2022, the Dockershim was removed entirely. History: Kubernetes (κυβερνήτης, Greek for "helmsman", "pilot", or "governor", and the etymological root of cybernetics) was announced by Google in mid-2014. The project was created by Joe Beda, Brendan Burns, and Craig McLuckie, who were soon joined by other ...
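To make the runtime discussion concrete, the short sketch below (again assuming the "kubernetes" Python client and a configured cluster) asks the API server which CRI-compliant runtime each node reports; on a post-1.24 cluster this is typically containerd or CRI-O rather than Docker.

```python
# Query each node's reported container runtime via the Kubernetes API.
# Assumes the "kubernetes" Python client and a kubeconfig pointing at a cluster.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    info = node.status.node_info
    # e.g. "containerd://1.7.2" or "cri-o://1.28.1" on clusters without the Dockershim
    print(f"{node.metadata.name}: {info.container_runtime_version}")
```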


Network Function Virtualization
Network functions virtualization (NFV) is a network architecture concept that leverages IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create and deliver communication services. NFV relies upon traditional server-virtualization techniques such as those used in enterprise IT. A virtualized network function, or VNF, is implemented within one or more virtual machines or containers running different software and processes, on top of commercial off-the-shelf (COTS) high-volume servers, switches, and storage devices, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function, thereby avoiding vendor lock-in. For example, a virtual session border controller could be deployed to protect a network without the typical cost and complexity of obtaining and installing physical network protection units. Other examples of NFV include virtualized ...
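The idea that virtualized functions are building blocks that can be chained into a service can be illustrated without any NFV tooling. The sketch below simply composes placeholder packet-processing functions in Python; the function names and the "packet" dictionary are invented for illustration and are not tied to any ETSI interface.

```python
# Illustrative only: model a service function chain as composable callables.
# The functions and the "packet" dictionary are placeholders, not an NFV API.
from typing import Callable, Dict, List

Packet = Dict[str, str]
NetworkFunction = Callable[[Packet], Packet]


def firewall(packet: Packet) -> Packet:
    packet["firewall"] = "passed"        # pretend we filtered the packet
    return packet


def nat(packet: Packet) -> Packet:
    packet["src"] = "203.0.113.1"        # pretend we rewrote the source address
    return packet


def session_border_controller(packet: Packet) -> Packet:
    packet["sbc"] = "admitted"           # pretend we applied admission control
    return packet


def run_chain(packet: Packet, chain: List[NetworkFunction]) -> Packet:
    for function in chain:               # each VNF/CNF is one link in the chain
        packet = function(packet)
    return packet


if __name__ == "__main__":
    service_chain = [firewall, nat, session_border_controller]
    print(run_chain({"src": "10.0.0.5", "dst": "198.51.100.7"}, service_chain))
```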


Microservices
A microservice architecture, a variant of the service-oriented architecture structural style, is an architectural pattern that arranges an application as a collection of loosely coupled, fine-grained services communicating through lightweight protocols. One of its goals is that teams can develop and deploy their services independently of others. This is achieved by reducing dependencies in the code base, allowing developers to evolve their services with limited restrictions from users, and hiding additional complexity from users. As a consequence, organizations can develop and scale software more quickly and make easier use of off-the-shelf services. Communication requirements are reduced. These benefits come at the cost of maintaining the decoupling. Interfaces need to be designed carefully and treated as a public API. One technique that is used is having multiple interfaces on the same service, or multiple versions of the same ...
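The excerpt's point about treating interfaces as a public API and exposing multiple versions of the same service can be sketched as follows. This assumes the Flask library and invents trivial /v1 and /v2 status endpoints purely for illustration.

```python
# Illustrative microservice exposing two versions of the same interface,
# so existing consumers of /v1 keep working while /v2 evolves independently.
# Assumes Flask is installed; the endpoints and payloads are made up.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/v1/status")
def status_v1():
    # Original contract: the minimal payload that older clients depend on.
    return jsonify({"status": "ok"})


@app.route("/v2/status")
def status_v2():
    # Evolved contract: richer structure, added without breaking /v1 callers.
    return jsonify({"status": "ok", "replicas": 3, "version": "2.0"})


if __name__ == "__main__":
    app.run(port=8080)  # each microservice runs and deploys on its own
```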


Scalability
Scalability is the property of a system to handle a growing amount of work by adding resources to the system. In an economic context, a scalable business model implies that a company can increase sales given increased resources. For example, a package delivery system is scalable because more packages can be delivered by adding more delivery vehicles. However, if all packages had to first pass through a single warehouse for sorting, the system would not be as scalable, because one warehouse can handle only a limited number of packages. In computing, scalability is a characteristic of computers, networks, algorithms, networking protocols, programs, and applications. An example is a search engine, which must support increasing numbers of users and an increasing number of indexed topics. Webscale is a computer architectural approach that brings the capabilities of large-scale cloud computing companies into enterprise data centers. In mathematics, scalability mostly refers to closure under ...
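A toy version of "handling more work by adding resources" can be written with Python's standard library alone. The sketch below scales throughput by adding worker processes; the workload and worker counts are arbitrary and chosen only for illustration.

```python
# Illustrative horizontal scaling: the same workload finishes faster
# as more worker processes (resources) are added. Numbers are arbitrary.
import time
from multiprocessing import Pool


def handle_package(package_id: int) -> int:
    time.sleep(0.01)  # stand-in for sorting/delivering one package
    return package_id


if __name__ == "__main__":
    packages = list(range(200))
    for workers in (1, 2, 4, 8):              # "add more delivery vehicles"
        start = time.perf_counter()
        with Pool(processes=workers) as pool:
            pool.map(handle_package, packages)
        elapsed = time.perf_counter() - start
        print(f"{workers} worker(s): {elapsed:.2f}s")
```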


Guest Operating System
Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" and "virtual machine monitor" became preferred over time. Concept: The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has also been called "platform virtualization" or, more recently, "server virtualization". Platform virtualization is performed on a given hardware platform by ''host'' software (a ''control program''), which creates a simulated ...


Docker (software)
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called ''containers''. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc. Background: Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines. Operation: Docker can package an application and its dependencies in a virtual container that can run on any Linux, Windows, or macOS computer. This enables the application to run in a variety of locations, such as on-premises, or in a public (see decentralized computing, distributed computing, and cloud computing) or private cloud. When running on Linux, ...
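As a concrete sketch of driving Docker Engine programmatically, the snippet below uses the Docker SDK for Python. It assumes the "docker" package is installed and a local daemon is running; the alpine image and echo command are arbitrary examples.

```python
# Run a short-lived container via the Docker SDK for Python.
# Assumes the "docker" package and a local Docker Engine; the image is an example.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pull (if needed) and run an isolated container, capturing its output.
output = client.containers.run(
    "alpine:3.19",                        # example base image
    ["echo", "hello from a container"],
    remove=True,                          # clean up the container afterwards
)
print(output.decode().strip())
```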


Continuous Delivery
Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time by following a pipeline through a "production-like environment", rather than being released manually. It aims at building, testing, and releasing software with greater speed and frequency. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery. Continuous delivery contrasts with continuous deployment (also abbreviated CD), a similar approach in which software functionalities are delivered frequently through automated deployments and in which software is also produced in short cycles bu ...
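A deployment pipeline of the kind described can be sketched as a few ordered stages. The commands below (a build step, pytest, a deploy script) are placeholders for whatever tooling a real project would run; the point is only that every change flows through the same repeatable steps toward a production-like environment.

```python
# Illustrative continuous-delivery pipeline: every change passes through the
# same build/test/deploy stages. The commands are placeholders for a real
# project's tooling and are expected to exist in the repository being built.
import subprocess
import sys

PIPELINE = [
    ("build", ["python", "-m", "build"]),            # package the application
    ("test", ["python", "-m", "pytest", "-q"]),      # run the automated tests
    ("deploy-staging", ["./deploy.sh", "staging"]),  # release to a production-like env
]


def run_pipeline() -> None:
    for stage, command in PIPELINE:
        print(f"--- {stage} ---")
        result = subprocess.run(command)
        if result.returncode != 0:  # stop the pipeline on the first failure
            sys.exit(f"stage '{stage}' failed; release aborted")
    print("artifact is releasable at any time")


if __name__ == "__main__":
    run_pipeline()
```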


DevOps
DevOps is a set of practices that combines software development (''Dev'') and IT operations (''Ops''). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary to agile software development; several DevOps aspects came from the ''agile'' way of working. Definition: Other than it being a cross-functional combination (and a portmanteau) of the terms and concepts for "development" and "operations", academics and practitioners have not developed a universal definition for the term "DevOps". Most often, DevOps is characterized by key principles: shared ownership, workflow automation, and rapid feedback. From an academic perspective, Len Bass, Ingo Weber, and Liming Zhu—three computer science researchers from the CSIRO and the Software Engineering Institute—suggested defining DevOps as "a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality".


Service Discovery
Service discovery is the process of automatically detecting devices and services on a computer network. This reduces the need for manual configuration by users and administrators. A service discovery protocol (SDP) is a network protocol that helps accomplish service discovery. Service discovery aims to reduce the configuration efforts required by users and administrators. It requires a common language to allow software agents to make use of one another's services without the need for continuous user intervention. Protocols: There are many service discovery protocols, including:
* Bluetooth Service Discovery Protocol (SDP)
* DNS Service Discovery (DNS-SD), a component of zero-configuration networking
* DNS, as used for example in Kubernetes
* Dynamic Host Configuration Protocol (DHCP)
* Internet Storage Name Service (iSNS)
* Jini for Java objects
* Lightweight Service Discovery (LSD), for mobile ad hoc networks
* Link Layer Discovery Protocol (LLDP) standards- ...
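Since the list mentions plain DNS as the discovery mechanism used in Kubernetes, the snippet below shows a minimal standard-library lookup of a service name. The cluster-style hostname is a hypothetical example and resolves only inside an environment whose DNS actually serves such records.

```python
# Minimal DNS-based service discovery using only the standard library.
# The hostname follows the Kubernetes service-DNS naming convention but is
# hypothetical; it resolves only where such records are actually served.
import socket

SERVICE = "packet-filter.default.svc.cluster.local"

try:
    # Ask the resolver for address/port pairs for the named service.
    for family, _, _, _, sockaddr in socket.getaddrinfo(SERVICE, 8080, type=socket.SOCK_STREAM):
        print(f"{SERVICE} -> {sockaddr[0]}:{sockaddr[1]}")
except socket.gaierror as error:
    print(f"service not discoverable from here: {error}")
```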


Linux Foundation
The Linux Foundation (LF) is a non-profit technology consortium that works to standardize Linux, support its growth, and promote its commercial adoption. Additionally, it hosts and promotes the collaborative development of open source software projects. It is a major force in promoting diversity and inclusion in both Linux and the wider open source software community. The organization traces its origins to 2000, under the Open Source Development Labs (OSDL), and became the Linux Foundation when OSDL merged with the Free Standards Group (FSG). The Linux Foundation sponsors the work of Linux creator Linus Torvalds and lead maintainer Greg Kroah-Hartman. Furthermore, it is supported by members such as AT&T, Cisco, Fujitsu, Google, Hitachi, Huawei, IBM, Intel, Meta, Microsoft, NEC, Oracle, Orange S.A., Qualcomm, Samsung, Tencent, and VMware, as well as developers from around the world. In recent y ...