Service
Expose an application running in your cluster behind a single outward-facing endpoint, even when the workload is split across multiple backends.
The Kubernetes network model enables container networking within a pod and between pods on the same or different nodes.
Figure 1 depicts a cluster with a control plane and a couple of nodes (VMs or physical machines) attached to a network, each with pods containing one or more containers. In addition, each pod has its own IP address, called a pod IP.
Figure 1. High-level example of a K8s cluster supporting container networking
The other K8s network components shown in Figure 1 consist of the following:
Local pod networking
- an optional component that enables pod-to-pod communication on the same node. You might know this as a virtual L2 bridge, which is just one possible implementation.
Network plugins
- set up IP addressing for pods and their containers, and allow pods to communicate even when the source pod and destination pod are running on different nodes. Different network plugins achieve this in different ways, with examples including tunneling and IP routing.
In the single-node case (Node 1), you have connectivity between containers running in a single pod, such as Pod 1. You also have connectivity between containers running in two or more pods on the same node; in this example, Pod 1 and Pod 7 running on Node 1.
In the multi-node case (Node 1 and Node 2), you have container communications between pods on nodes connected via a network. In the example above, Pod 7 on Node 1 can talk to Pod 21 on Node 2.
The network model describes how pods and their associated pod IPs can integrate with the larger network to support container networking.
Every Pod in your cluster gets its own unique cluster-wide IP address called a pod IP.
If you have deployed an IPv4/IPv6 dual-stack cluster, then you - or your network plugin(s) - must allocate pod IPs for both IPv4 and IPv6 for each pod. Allocation is performed per address family: each pod gets one IPv4 address and one IPv6 address.
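On a dual-stack cluster, both addresses appear in the Pod's status. The excerpt below is only a sketch; the addresses are illustrative, not taken from this page:

```yaml
# Hypothetical excerpt of a Pod's status on a dual-stack cluster
# (for example, from: kubectl get pod <name> -o yaml)
status:
  podIP: 10.244.1.4
  podIPs:
  - ip: 10.244.1.4          # IPv4 pod IP
  - ip: fd00:10:244:1::4    # IPv6 pod IP
```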
Kubernetes imposes the following requirements on any networking implementation (barring any intentional network segmentation policies): pods can communicate with all other pods on any other node without NAT, and agents on a node (such as system daemons or the kubelet) can communicate with all pods on that node.
Some platforms, such as Linux and Windows, support pods running in the host network. Pods attached to the host network of a node can still communicate with all pods on all nodes without NAT.
In this example, container-to-container connectivity within the same pod makes use of the pod IP, network namespaces, and localhost networking.
Pods run in their own network namespace. All the containers in a pod share this network namespace. (Network namespaces are not the same as the Kubernetes namespace concept).
Figure 2 illustrates this example of localhost communications between two containers on the same Pod.
Figure 2. Example of container communications within the same Pod using localhost:port number
Your Pod 1 containers share a network namespace and an IP address. However, you do need to configure separate port numbers for each container.
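As a minimal sketch of this pattern (the Pod name, images, and ports below are illustrative assumptions, not taken from the figures on this page), two containers in one Pod can reach each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-example   # hypothetical name
spec:
  containers:
  - name: web
    image: nginx               # listens on port 80 inside the shared network namespace
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # The sidecar reaches the web container via localhost because both containers
    # share the Pod's network namespace (and therefore its pod IP).
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```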
In this example, inter-pod connectivity on the same node utilizes pod IPs, pod and root network namespaces, veth links, and a virtual L2 bridge.
Pods operate in their own network namespace and have their own IP addresses. You can think of this as multiple distinct IP hosts (the pods) that want to talk to each other through an L2 bridge. Inter-pod networking on the same node is no different.
This particular example of inter-pod networking on the same node employs the following:
Root network namespace for the node.
Veth links enabling communication between the pod network namespace and root network namespace.
L2 or L3 networking between the Pods. This example uses a virtual L2 bridge for local pod networking.
Figure 3 illustrates an example of two pods talking to each other on the same node.
Figure 3. Example of Pod 1 and Pod 2 on the same node communicating through an L2 bridge
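The manifest below is a rough sketch of the same-node case. The node name, Pod names, images, and the pod IP in the comment are assumptions for illustration; in a real cluster the scheduler normally places Pods for you, and you would look up Pod 1's pod IP before using it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  nodeName: node-1             # pin both Pods to the same (hypothetical) node
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  nodeName: node-1
  containers:
  - name: client
    image: busybox
    # Replace 10.244.1.4 with Pod 1's actual pod IP (kubectl get pod pod-1 -o wide).
    # The request travels across the veth links and the node's local pod networking.
    command: ["sh", "-c", "wget -qO- http://10.244.1.4:80; sleep 3600"]
```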
This example illustrates the use of a virtual overlay network to address your inter-node pod networking requirements. This approach employs virtual tunnels to form a separate virtual overlay network to transport packets from one pod to another across the network.
Figure 4 illustrates the notion of a virtual overlay network. Pods 1, 2 and 3 are networked together through a mesh of virtual tunnels.
Figure 4. Example of a virtual overlay network
Figure 5 shows an example of two pods running on separate nodes and connected via a virtual overlay network tunnel. It uses pod and root network namespaces, network plugins, and tunnel encapsulation/decapsulation.
Figure 5. Example of Pod 1 - Pod 2 networking on different nodes using a virtual overlay network tunnel
In this example, the network plugin has set up a virtual overlay network. Pod packets sourced on one node and destined for pods on a different node are encapsulated and sent over a tunnel mechanism such as GENEVE.
The destination node retrieves the encapsulated pod packet, strips off the encapsulation header, and then sends it to the target pod, provided that no NetworkPolicy restrictions block that packet.
For two examples of virtual overlay network implementations, see Flannel and Weave Net.
Virtual overlay network - Logical or virtual network of tunnels using encapsulation to connect Pods running on different nodes.
Network plugins - set up IP addressing for Pods and their containers, and allow Pods to communicate even when the source Pod and destination Pod are running on different nodes. Different network plugins achieve this in different ways, with examples including tunneling and IP routing.
L2 bridge - a (virtual) layer 2 bridge enabling inter-pod connectivity on the same host.
Encapsulation - Ability to encapsulate L2 or L3 packets belonging to an inner network with an outer network header for transport across the outer network. This forms a tunnel, where the encapsulation function is performed at tunnel ingress and the de-encapsulation function is performed at tunnel egress. You can think of Pods on nodes as the inner network and nodes networked together as the outer network. IPIP and VXLAN are two examples of encapsulation employed in K8s cluster networks.
Virtual ethernet link (veth) - Virtual link allowing you to convey packets between namespaces.
Network namespace - Form of isolation where resources, such as the containers in a Pod, share a network stack. You will also find a default root network namespace offering a network stack for Pod and node connectivity.
Kubernetes networking addresses four concerns:
Containers within a Pod use networking to communicate via loopback.
Cluster networking provides communication between different Pods.
The Service API lets you expose an application running in Pods so that it is reachable from outside your cluster.
You can also use Services to publish services only for consumption inside your cluster.
The Connecting Applications with Services tutorial lets you learn about Services and Kubernetes networking with a hands-on example.
Cluster Networking explains how to set up networking for your cluster, and also provides an overview of the technologies involved.
Expose an application running in your cluster behind a single outward-facing endpoint, even when the workload is split across multiple backends.
Make your HTTP (or HTTPS) network service available using a protocol-aware configuration mechanism that understands web concepts like URIs, hostnames, paths, and more. The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API.
In order for an Ingress to work in your cluster, there must be an ingress controller running. You need to select at least one ingress controller and make sure it is set up in your cluster. This page lists common ingress controllers that you can deploy.
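As a hedged sketch (the Ingress name, hostname, backend Service, and ingress class below are assumptions), a minimal Ingress that routes HTTP traffic to one backend could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is deployed
  rules:
  - host: example.local          # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web            # assumes a Service named "web" exists
            port:
              number: 80
```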
The EndpointSlice API is the mechanism that Kubernetes uses to let your Service scale to handle large numbers of backends, and allows the cluster to update its list of healthy backends efficiently.
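EndpointSlices are normally created and managed by the control plane, so you rarely write them by hand. Purely as an illustration (the Service name, address, and node below are assumptions), an EndpointSlice looks roughly like this:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-abc12                      # usually generated by the control plane
  labels:
    kubernetes.io/service-name: web    # ties this slice to the "web" Service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.244.1.4"                       # illustrative pod IP of a ready backend
  conditions:
    ready: true
  nodeName: node-1
```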
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), NetworkPolicies allow you to specify rules for traffic flow within your cluster, and also between Pods and the outside world. Your cluster must use a network plugin that supports NetworkPolicy enforcement.
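For example, a minimal NetworkPolicy might only admit traffic from Pods carrying a particular label. The names, labels, and port below are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend            # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend                # the policy applies to Pods labelled app=backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend           # only Pods labelled app=frontend may connect
    ports:
    - protocol: TCP
      port: 8080                  # and only on TCP port 8080
```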
Your workload can discover Services within your cluster using DNS; this page explains how that works.
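As a quick sketch (the Pod name and the Service "web" in namespace "default" are assumptions, and the cluster.local suffix assumes the default cluster domain), a Pod can resolve a Service by its DNS name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-lookup-example        # hypothetical name
spec:
  containers:
  - name: lookup
    image: busybox
    # Resolves the cluster DNS name of a hypothetical Service "web" in the "default" namespace.
    command: ["sh", "-c", "nslookup web.default.svc.cluster.local; sleep 3600"]
```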
Kubernetes lets you configure single-stack IPv4 networking, single-stack IPv6 networking, or dual stack networking with both network families active. This page explains how.
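For instance, on a dual-stack cluster a Service can request both address families. The Service name, selector, and ports below are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                        # hypothetical Service
spec:
  ipFamilyPolicy: PreferDualStack  # use both families when the cluster supports them
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```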
Topology Aware Routing provides a mechanism to help keep network traffic within the zone where it originated. Preferring same-zone traffic between Pods in your cluster can help with reliability, performance (network latency and throughput), or cost.
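As a sketch, you opt a Service in to topology aware routing with an annotation. The Service name and selector here are assumptions, and the exact annotation depends on your Kubernetes version (newer releases use service.kubernetes.io/topology-mode; older releases used a topology-aware-hints annotation):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                                     # hypothetical Service
  annotations:
    service.kubernetes.io/topology-mode: Auto   # ask the control plane to add topology hints
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
```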
If two Pods in your cluster want to communicate, and both Pods are actually running on the same node, use Service Internal Traffic Policy to keep network traffic within that node. Avoiding a round trip via the cluster network can help with reliability, performance (network latency and throughput), or cost.
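A hedged sketch of that setting (the Service name and selector are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-internal              # hypothetical Service
spec:
  internalTrafficPolicy: Local    # route in-cluster traffic only to endpoints on the client's node
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
```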