Kubernetes networking is the system by which the different components of a Kubernetes cluster communicate with each other. It involves the routing and management of network traffic between nodes, pods, services, and external resources. Kubernetes networking enables the seamless operation of distributed applications and services within the cluster, and is a key part of what makes containerized applications scalable, reliable, and performant.
Services
In Kubernetes, a Service is an abstraction that defines a logical set of pods and a policy by which to access them. It provides a stable IP address and DNS name for a set of pods, and allows them to be accessed by other components within the cluster, or from outside the cluster if the Service is exposed externally.
Essentially, a Service acts as a simple load balancer for a group of pods, distributing traffic across them and routing requests only to pods that are ready to receive them.
Here's an example YAML definition for a simple Service that exposes a deployment of nginx pods:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
In this example, the Service is named nginx-service, and it targets pods with the label app: nginx. It exposes port 80 as the Service port and forwards traffic to port 80 on the pods (the targetPort).
Once the Service is created, other components within the cluster can reach the nginx pods using the DNS name nginx-service and port 80. For example, another pod in the cluster could send requests to http://nginx-service:80, and the Service would forward them to one of the healthy nginx pods.
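As an illustration, a client pod could reach the nginx pods through the Service like this (the pod name and image here are assumptions, not part of the example above):

```yaml
# Hypothetical client pod; the name and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: curl
    image: curlimages/curl
    # The Service DNS name resolves inside the cluster, so the pod
    # can reach the nginx pods via the stable name nginx-service.
    command: ["curl", "http://nginx-service:80"]
```

Because the Service name is stable, the client does not need to know the individual pod IPs, which change as pods are rescheduled.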
Overall, Services are a key component of Kubernetes networking, allowing pods to be accessed and load balanced in a reliable and scalable way.
Ingress
In Kubernetes, an Ingress is a resource that acts as an entry point for external traffic into a cluster, providing a way to route traffic to different services based on the requested hostname or URL path.
Essentially, an Ingress, implemented by an Ingress controller, acts as a layer 7 load balancer that sits in front of one or more services and intelligently routes incoming traffic to the appropriate service. It can also provide features such as SSL termination, virtual hosting, and session affinity.
For example, let's say you have two services running in your Kubernetes cluster, a web application and an API. You could create an Ingress that maps incoming traffic to the /api path to the API service, and all other traffic to the web application service. Then, external requests to the /api path would be automatically routed to the API service, and all other requests would go to the web application service.
Overall, Ingress is a powerful feature of Kubernetes that simplifies external access to multiple services within a cluster, and helps improve the scalability and availability of your applications.
Here's an example YAML definition for an Ingress resource in Kubernetes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /webapp
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              name: http
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              name: http
In this example, the Ingress is named my-ingress and has an annotation that applies to the NGINX Ingress controller. It defines two rules for handling incoming HTTP requests: one for requests to example.com/webapp and one for requests to example.com/api. Both rules use a pathType of Prefix, which means that any request whose URL path starts with the specified path is routed to the corresponding backend service. The backend for the /webapp rule is a service named webapp-service with a port named http, while the backend for the /api rule is a service named api-service with a port named http.
This example is just a basic template, and the specific details of the Ingress resource will depend on your application and infrastructure requirements.
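To create the Ingress and confirm that the controller has admitted it, you might run something like the following (the file name is an assumption):

```shell
# Apply the Ingress manifest (file name assumed).
kubectl apply -f my-ingress.yaml

# List the Ingress and the address assigned by the controller.
kubectl get ingress my-ingress

# Inspect the rules and backend services in detail.
kubectl describe ingress my-ingress
```

Note that an Ingress resource has no effect unless an Ingress controller (such as the NGINX Ingress controller referenced by the annotation above) is running in the cluster.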
Network Policies
In Kubernetes, Network Policies are used to define how groups of pods are allowed to communicate with each other, and can be used to enforce network segmentation and access control within the cluster.
A Network Policy is a Kubernetes resource that defines a set of rules for incoming and outgoing traffic to and from pods. These rules can be based on various criteria, such as the source and destination pods, IP addresses, ports, and protocols.
Here's an example YAML definition for a simple Network Policy that allows traffic between two pods based on their labels:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-to-redis
spec:
  podSelector:
    matchLabels:
      app: redis
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx
    ports:
    - protocol: TCP
      port: 6379
In this example, the Network Policy is named allow-nginx-to-redis, and it allows traffic to flow from pods with the app: nginx label to pods with the app: redis label over TCP port 6379, the default Redis port. The podSelector field specifies which pods the policy applies to; here it selects the Redis pods, and the ingress rule defines which peers may connect to them.
Once the Network Policy is created, only traffic matching the specified criteria is allowed to flow to the selected pods; any other ingress traffic to them is blocked. Note that Network Policies are only enforced if the cluster's CNI plugin supports them (for example, Calico or Cilium); otherwise they are silently ignored.
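A common companion pattern, not shown in the example above, is to start from a default-deny policy for a namespace and then allow specific flows with more targeted policies. A minimal sketch:

```yaml
# Deny all ingress traffic to every pod in the namespace.
# An empty podSelector ({}) matches all pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

With this in place, pods in the namespace accept no incoming traffic except what other Network Policies explicitly allow.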
To apply the Network Policy, follow these steps:
1. Copy the YAML definition to a file named network-policy.yaml.
2. Open a terminal or command prompt and navigate to the directory where the file is located.
3. Run the following command to create the Network Policy in your cluster:
kubectl apply -f network-policy.yaml
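Once the policy has been created, you can confirm that it is in place (the policy name matches the example above):

```shell
# List the Network Policies in the current namespace.
kubectl get networkpolicy

# Show the pod selector and ingress rules of the policy.
kubectl describe networkpolicy allow-nginx-to-redis
```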
DNS
In Kubernetes, DNS (Domain Name System) is used to provide a reliable and scalable way to resolve the IP addresses of services and pods in the cluster. DNS is an essential component of Kubernetes networking, as it enables services and pods to communicate with each other using human-readable names instead of IP addresses.
When a Service is created in Kubernetes, a DNS record is automatically created for it, using the format [service-name].[namespace].svc.cluster.local. For a standard (ClusterIP) Service, this name resolves to the Service's stable cluster IP; for a headless Service, it resolves directly to the IP addresses of the pods behind it.
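Shorter forms of the name also resolve from inside the cluster, thanks to the DNS search domains configured in each pod's /etc/resolv.conf. For a Service named web-service in the web namespace, all of the following work (the short form only from pods in the same namespace):

```shell
# From a pod in the same namespace (web):
nslookup web-service

# From a pod in any namespace:
nslookup web-service.web

# Fully qualified:
nslookup web-service.web.svc.cluster.local
```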
Here's an example of how you might use DNS in Kubernetes:
Suppose you have a service named web-service running in the web namespace, and it's associated with three pods. You can use DNS to resolve the Service from inside the cluster as follows:
1. Open a terminal and run the following command to open a shell in a pod:
kubectl exec -it [pod-name] -n web -- sh
Replace [pod-name] with the name of one of the pods associated with the web-service service.
2. Once you're in the shell, install a package that provides the nslookup command. On Alpine-based images, that package is bind-tools:
apk add --no-cache bind-tools
(On Debian-based images, use apt-get update && apt-get install -y dnsutils instead.)
3. Run the following command to resolve the web-service service:
nslookup web-service.web.svc.cluster.local
This should return the cluster IP of the web-service Service (or, for a headless Service, the IP addresses of its pods).
You can use DNS in Kubernetes to resolve the IP addresses of services and pods from within other pods, as well as from outside the cluster (if the Kubernetes cluster is integrated with an external DNS server). DNS can also be used to create custom domain names for services in the cluster, and to provide load balancing and failover for services.
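One way to have DNS return individual pod IPs rather than a single cluster IP is a headless Service, created by setting clusterIP to None. A minimal sketch, with the name and selector assumed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service-headless
spec:
  clusterIP: None   # headless: DNS returns the pod IPs directly
  selector:
    app: web
  ports:
  - port: 80
```

Resolving web-service-headless then yields one A record per ready pod, which is useful for stateful workloads that need to address specific peers.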
CNI Plugins
CNI (Container Network Interface) is a specification that defines how networking should be configured for containers, and it is the standard used by container orchestrators such as Kubernetes. CNI plugins are executable programs that implement the CNI specification; they are responsible for managing the network interfaces of containers running on a host.
In Kubernetes, CNI plugins are used to manage the networking of pods, which are the smallest deployable units in the cluster. When a pod is created, Kubernetes delegates the responsibility of setting up its network to a CNI plugin. The CNI plugin then creates the necessary network interfaces, assigns IP addresses to the pod, and sets up the appropriate routing rules and firewall policies.
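CNI plugins are configured through JSON files on each node, conventionally placed under /etc/cni/net.d/. As a sketch, a configuration for the reference bridge plugin might look like this (the network name and subnet are illustrative assumptions):

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

Here the bridge plugin attaches each pod to a Linux bridge (cni0), and the host-local IPAM plugin assigns each pod an IP address from the configured subnet. Production clusters typically use more capable plugins such as Calico, Cilium, or Flannel, which follow the same configuration model.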