The Service model in Kubernetes provides an abstraction that defines a logical set of Pods and a policy by which to access them, and it handles service discovery. Understanding service discovery is key to understanding how an application runs on Kubernetes.

Any API object in Kubernetes, including a node or a pod, may have key-value pairs associated with it. Kubernetes refers to these key-value pairs as labels. Service discovery takes advantage of labels and selectors to associate a service with a set of pods: any pod whose labels match the selector defined in the service manifest is automatically discovered by the service. This is how the service knows which pods it should route requests to, i.e. which pods are the endpoints of the service. In a nutshell, a service identifies its member/endpoint pods via selectors. In the service YAML file, for instance, the selector carries the (key-value pair) label app: service1; in the pod configuration file we assign the pod labels in the metadata section, e.g. app: service1. Note that all of the selector's key-value pairs must match the pod's labels.

The port attribute in the service manifest defines the port the service itself listens on, while targetPort is the container port on the pods to which traffic is forwarded. Also note that pods in Kubernetes are ephemeral: pods are destroyed, and a newly created pod will have a different IP address. The service ensures that traffic is still routed to an appropriate pod within the cluster, regardless of which node the pod runs on.
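As a minimal sketch of this label/selector matching (the name service1, the label app: service1, the container port 8080, and the image name are illustrative assumptions, not values from any particular application), a service and a matching pod might look like this:

```yaml
# Service: selects every pod carrying the label app: service1
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  selector:
    app: service1         # must match the pod's labels
  ports:
    - port: 80            # port the service itself listens on
      targetPort: 8080    # container port on the pods that traffic is forwarded to
---
# Pod: the labels in its metadata make it an endpoint of the service above
apiVersion: v1
kind: Pod
metadata:
  name: service1-pod
  labels:
    app: service1         # matches the service selector
spec:
  containers:
    - name: app
      image: example.com/service1:1.0   # placeholder image assumed to listen on 8080
      ports:
        - containerPort: 8080
```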

Each service exposes an IP address and, usually, a DNS name, both of which remain stable for the lifetime of the service. Internal or external consumers that need to communicate with the set of pods use the service's IP address or, more commonly, its DNS name. A single well-known address in front of every set of pods eliminates the need for any service discovery logic on the client side. In this respect the setup looks very similar to a load balancer sitting in front of a set of VMs.
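For example (a hedged sketch: the default namespace, the service name service1 from the earlier example, and the variable name BACKEND_URL are assumptions), a client pod can simply be configured with the service's stable DNS name and needs no discovery logic of its own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
    - name: client
      image: example.com/client:1.0   # placeholder image
      env:
        - name: BACKEND_URL           # hypothetical variable the client app reads
          value: "http://service1.default.svc.cluster.local:80"   # stable service DNS name
```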

Note that DNS resolves to a SINGLE cluster IP for the service; DNS round-robin is not used. This ClusterIP is a so-called virtual address: no single network interface in the whole system actually carries it.

Note that there is actually no hard dependency on DNS for Kubernetes applications: when a pod starts up, Kubernetes injects, for every service already running, a couple of environment variables of the form <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT (e.g. SERVICE1_SERVICE_HOST and SERVICE1_SERVICE_PORT for a service named service1).

Hence the service provides an abstraction that offers a stable IP address and load balancing.

Kubernetes provides three different service types.

  1. ClusterIP - ClusterIP is the default Kubernetes service type. Your service will be exposed on a ClusterIP unless you explicitly define another type. Such services are reachable only by pods/services inside the cluster, i.e. a ClusterIP provides network connectivity within your cluster and cannot normally be accessed from outside it. You use these services for internal networking between your workloads.
  2. NodePort - Creates a service that is accessible from outside the cluster on a static port on each worker node, i.e. external traffic has access to a fixed port on every worker node. The port is specified via the nodePort attribute. When we create a NodePort service, a ClusterIP service to which the NodePort service routes is automatically created as well. The service spans all the worker nodes. Note that NodePort is not very secure and is not usually used in production for external connections; typically an Ingress or a load balancer is used instead, which routes to the ClusterIP address (see the sketch after this list).
  3. LoadBalancer - The service becomes available externally through the cloud provider's load balancer. If the service type is set to LoadBalancer, the cloud provider's load balancer is provisioned and used. Note that the NodePort and ClusterIP services are created automatically: LoadBalancer is an extension of NodePort, which in turn is an extension of the ClusterIP service (also shown in the sketch after this list).
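As a hedged sketch of these externally reachable types (the service names, the nodePort value 30080, and the ports are illustrative assumptions; the selector reuses the app: service1 label from the earlier example), a NodePort service and a LoadBalancer service might look like this:

```yaml
# NodePort: exposes a static port (here 30080) on every worker node,
# and a ClusterIP is created underneath for in-cluster traffic.
apiVersion: v1
kind: Service
metadata:
  name: service1-nodeport
spec:
  type: NodePort
  selector:
    app: service1
  ports:
    - port: 80          # ClusterIP port inside the cluster
      targetPort: 8080  # container port on the pods
      nodePort: 30080   # static port opened on each node (default range 30000-32767)
---
# LoadBalancer: asks the cloud provider for an external load balancer;
# NodePort and ClusterIP services are created underneath automatically.
apiVersion: v1
kind: Service
metadata:
  name: service1-lb
spec:
  type: LoadBalancer
  selector:
    app: service1
  ports:
    - port: 80
      targetPort: 8080
```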

Headless services (clusterIP: None)

If a client wants to communicate with one of the pods directly, without going through the service's load balancing, a headless service can be used. This is typically done for stateful applications like databases. Essentially, in such applications the pod replicas are not identical; each one has its own state. For example, the master node and a read replica are not the same, and a client may want to talk specifically to the master node.
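A minimal sketch of a headless service (the name db, the label app: db, and the port 5432 are illustrative; such a service is typically paired with a StatefulSet): with clusterIP set to None, no virtual IP is allocated, and a DNS lookup of the service returns the IPs of the individual pods, so the client can pick a specific one.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db                # illustrative name
spec:
  clusterIP: None         # headless: no virtual IP is allocated
  selector:
    app: db               # illustrative label on the database pods
  ports:
    - port: 5432          # illustrative database port
```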