We've spun up pods and connected to them individually, but that's frankly not super useful if we want to distribute real traffic across those pods. That's where services come in.
Services are an abstraction that provides a stable endpoint for a group of Pods and load balances traffic across them. By "stable endpoint", I just mean that the service will always be available at a given URL, even if the underlying pods are destroyed and recreated.
Let's add a service for our 3 synergychat-web pods. If you don't have 3 pods running, edit the deployment to have 3 replicas.
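If you need to scale up, here's one way to do it, assuming your deployment is named synergychat-web and its pods carry the app: synergychat-web label (the deployment name is an assumption; adjust it to match yours):

```shell
# Option 1: open the deployment in your editor and set replicas: 3
kubectl edit deployment synergychat-web

# Option 2: scale it directly from the CLI
kubectl scale deployment synergychat-web --replicas=3

# Verify that 3 pods are running
kubectl get pods -l app=synergychat-web
```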
Create a file called web-service.yaml and add the following:
- apiVersion: v1
- kind: Service
- metadata/name: web-service (we could call it anything, but this is a fine name)
- spec/selector/app: I'm going to let you figure out what should be here. This is how the service knows which pods to route traffic to.
- spec/ports: An array of port objects. You need one entry:
  - protocol: TCP (TCP will allow us to use HTTP)
  - port: 80 (this is the port that the service will listen on)
  - targetPort: 8080 (this is the port that the pods are listening on)

This creates a new service called web-service with a few properties:

- It listens on port 80 for incoming traffic
- It forwards that traffic to the pods on port 8080
- It matches pods with the app: synergychat-web label selector and automatically adds them to its pool

Create the service:
kubectl apply -f web-service.yaml
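As a sanity check, you can confirm the service was created and that it actually found your pods (the endpoints list should contain one pod IP per replica; if it's empty, your selector is probably wrong):

```shell
# Show the service and its cluster IP / ports
kubectl get service web-service

# Show the pod IPs the service will route traffic to
kubectl get endpoints web-service
```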
Now, let's forward the service's port to our local machine so we can test it out.
kubectl port-forward service/web-service 8080:80
Now, if you hit http://localhost:8080 in your browser, you should see the web app! It's better this time around because now our requests are being load-balanced across 3 pods.
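If you want to peek at which pods are actually handling requests (assuming the web app logs each incoming request, which is an assumption about this particular image), you can tail the logs of all the matching pods at once:

```shell
# List the pods behind the service
kubectl get pods -l app=synergychat-web

# Stream logs from all matching pods, prefixed with the pod name
kubectl logs -l app=synergychat-web --prefix
```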
Run and submit the CLI tests while forwarding.