Using CoreDNS To Conceal Network Identities Of Services In Istio

A crucial feature of the Istio service mesh is that it grants you complete control over how traffic is routed to a service. Each service in the mesh has a unique network identity that it receives from the underlying platform, i.e., Kubernetes. For example, a service named foo provisioned in a namespace named bar will have the FQDN (Fully Qualified Domain Name) foo.bar.svc.cluster.local, which also serves as its network identity. Other services within the cluster can use the network identity of foo to send requests to it, and those requests will reach one of the pods running an instance of the service.
For a service accessible to clients outside the cluster, the clients use an address that resolves to the IP address of the Istio ingress gateway. After evaluating the request, the gateway routes it to the destination service, thus shielding the external client from the network identity of the destination service. For example, as depicted in the following diagram, the client of a hypothetical service, addressable at an external hostname, is oblivious of the network identity of the service within the cluster.
Figure 1 Ingress Gateway acting as NLB
The previous diagram depicts the actual path of a request originating outside the cluster to the Saturn service. The FQDN used by the client resolves to the IP address of the Istio ingress gateway. The gateway consults the Istio service registry to direct the incoming request to an instance of the Saturn service. Note that communication between the gateway and the actual service also involves a sidecar proxy, but it is excluded from the diagram for brevity.

The Challenge

Assigning a hostname to an internet-facing microservice introduces indirection between the network identity of the service within the cluster and the address on which external clients reach it. With this indirection, a service can easily change its location (inside or outside the cluster) and its name without affecting its external clients.
This level of flexibility is unavailable to internal clients, since service-to-service communication within the cluster takes place using the network identity of the service. The lack of indirection means that developers can't rename services or namespaces without also affecting their clients. Moreover, porting services to infrastructure outside the cluster, or moving them to another cluster, changes their names, which in turn requires changes to the clients of those services. Because of such issues, a system with many interconnected microservices needs to establish an abstraction between the addresses and the identities of the services operating within the cluster.


A potential workaround for systems that have all their services exposed to the internet via the ingress gateway is to use the external endpoints of the services for internal communication as well. This is a bad practice, as such communication should still take place within the cluster for performance and security reasons. In networking, this pattern is known as hairpinning, which in the context of Kubernetes translates to service-to-service communication in which requests from a service leave the cluster and then re-enter it to reach the destination service.
Today, we will discuss a potential solution to this problem by configuring the DNS used by Istio and the DNS used by Kubernetes, which is CoreDNS for both systems. In Istio, you can add custom DNS records to the service registry using the ServiceEntry configuration resource. Envoy uses the service registries of Istio and Kubernetes to locate any service in the cluster. Istio uses a CoreDNS plugin to read the service entries and associate the IP addresses of services with their host addresses. The DNS plugin is deployed to the cluster when you install Istio with the following installation option.
--set istiocoredns.enabled=true
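For reference, here is a minimal sketch of how that option might be passed during a Helm-based installation (Helm 3 syntax); the chart path, release name, and namespace are assumptions and will differ depending on how you install Istio, and istioctl-based installs expose a similar value under values.istiocoredns.enabled.

$ helm install istio install/kubernetes/helm/istio \
      --namespace istio-system \
      --set istiocoredns.enabled=true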
For a cluster that has both the Kubernetes CoreDNS and the Istio CoreDNS services running, we can use the approach illustrated in the following diagram to assign a host address to a service and use that host address to communicate with it within the cluster.
Figure 2 Resolving IP address of service
Let’s briefly discuss what is happening here. Istio CoreDNS, which is deployed as the istiocoredns service in the cluster, registers all the service entries as DNS records. A limitation of the Istio CoreDNS plugin is that it ignores service entries that don’t have an associated IP address (see source), so our service bar must have a fixed cluster IP address. Since istiocoredns is responsible for the service entry DNS records, we configure kube-dns (the Kubernetes CoreDNS) to forward all resolution requests for domain names managed by Istio DNS to istiocoredns. Assuming that we want to assign the FQDN thebar.internal to the service bar, the following is how the service foo will communicate with the service bar within the cluster.
  1. A service entry that maps the FQDN thebar.internal to the cluster IP of the bar service is applied to the mesh so that the DNS record is available to the istiocoredns service. The Kubernetes DNS service, kube-dns, is configured to forward any resolution requests for the domain internal to istiocoredns.
  2. The service foo sends a request to the bar service at the address http://thebar.internal (see the sketch after this list).
  3. Envoy sends a resolution request to the kube-dns service to resolve the IP address of the host thebar.internal.
  4. The kube-dns service forwards the request to istiocoredns.
  5. The istiocoredns service returns the cluster IP address of the bar service to kube-dns.
  6. Envoy uses the resolved IP address to locate the bar service.
  7. Envoy sends the request to the bar service.
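To make step 2 concrete, the call from foo could look roughly like the following; the pod name and the request path are hypothetical and only illustrate the shape of the request, and this assumes curl is available in the foo container.

$ kubectl exec -it foo-58d9c7b4d6-x2lqp -c foo -- curl -s http://thebar.internal/api/items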
Let’s build a simple demo to illustrate this workflow.


Before I start with the demo, I would like to point you to the code repository for this sample, which is located here.
The prerequisite for this demo is a Kubernetes cluster with Istio deployed to it. I assume that you are using CoreDNS in both Kubernetes and Istio; CoreDNS is the default DNS server in recent versions of Kubernetes. I am using Docker Desktop for Windows with Kubernetes enabled for local development, but feel free to use whatever makes you happy.
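If you want to verify that both DNS services are present before proceeding, commands along these lines should work on most clusters; the resource names assume the default Kubernetes CoreDNS deployment and the istiocoredns add-on enabled through the installation option shown earlier.

$ kubectl get deployment/coredns -n kube-system
$ kubectl get service/istiocoredns -n istio-system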
I built a simple Node.js REST API that returns a list of fruits available in a country based on the country code passed to it as an argument. To test the service on your dev box, execute the following command to create a container and bind it to port 3000 on localhost.
$ docker run -p 3000:3000 --name fruits-api istiosuccinctly/fruits-api:1.0.0
Use another terminal instance to send a request to the API to fetch the fruits and their prices for Australia, whose country code is au. The other supported country codes are ind and usa.
$ curl http://localhost:3000/api/fruits/au

{"nectarine":2.5,"mandarin":2.3,"lemon":1.1,"kiwi":2.6}
Let’s deploy this service to our cluster now. The first resource we will need is a namespace with the label istio-injection set to enabled so that Istio injects a sidecar into all the service pods within this namespace. The following listing presents the definition of the namespace, named micro-shake-factory.
apiVersion: v1
kind: Namespace
metadata:
  name: micro-shake-factory
  labels:
    istio-injection: enabled
Next, we will create a deployment and a service for the fruits-api service using the following specification.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fruits-api-deployment-v1
  namespace: micro-shake-factory
spec:
  selector:
    matchLabels:
      app: fruits-api
  replicas: 1
  minReadySeconds: 1
  progressDeadlineSeconds: 600
  template:
    metadata:
      labels:
        app: fruits-api
        version: "1"
    spec:
      containers:
        - name: fruits-api
          image: istiosuccinctly/fruits-api:1.0.0
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - name: http-fruits-api
              containerPort: 3000
          env:
            - name: app_version
              value: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: fruits-api-service
  namespace: micro-shake-factory
spec:
  selector:
    app: fruits-api
  ports:
    - name: http-fruits-api-service
      port: 80
      targetPort: http-fruits-api
  clusterIP:
In the previous specification, note that we reserved a cluster IP address by setting the clusterIP property to an IP address that lies within the service CIDR range of the cluster. You can check the CIDR range for your cluster in the service-cluster-ip-range flag of the kube-apiserver specification.
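On clusters that run the API server as a static pod (kubeadm-based clusters, which include Docker Desktop), a command along these lines should surface the flag; the label selector is an assumption that holds for the standard static-pod manifest.

$ kubectl get pods -n kube-system -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep service-cluster-ip-range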
Finally, we will configure a VirtualService using the following specification, which routes all traffic to the fruits-api-service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fruits-api-vservice
  namespace: micro-shake-factory
spec:
  hosts:
    - fruits-api-service
  http:
    - route:
        - destination:
            host: fruits-api-service
            port:
              number: 80
Let’s combine all the previous specifications into a single file and apply them to the cluster using the following command.
$ kubectl apply -f

namespace/micro-shake-factory created
deployment.apps/fruits-api-deployment-v1 created
service/fruits-api-service created
virtualservice.networking.istio.io/fruits-api-vservice created
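Before moving on, it is worth confirming that the sidecar was injected into the pod. A quick check might look like the following; if injection worked, the pod should report two ready containers (the application container plus istio-proxy).

$ kubectl get pods -n micro-shake-factory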
We will now install a temporary pod in our cluster and look up the address of the fruits-api-service by executing the following command.
$ kubectl run dnsutils -it --rm --generator=run-pod/v1 --image=tutum/dnsutils bash

If you don't see a command prompt, try pressing enter.
root@dnsutils:/#
The previous command opens a shell in which we will use dig (Domain Information Groper) to find the IP address that the FQDN fruits-api-service.micro-shake-factory.svc.cluster.local resolves to.
root@dnsutils:/# dig fruits-api-service.micro-shake-factory.svc.cluster.local

; <<>> DiG 9.9.5-3ubuntu0.2-Ubuntu <<>> fruits-api-service.micro-shake-factory.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60665
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:
;fruits-api-service.micro-shake-factory.svc.cluster.local. IN A

;; ANSWER SECTION:
fruits-api-service.micro-shake-factory.svc.cluster.local. 5 IN A

;; Query time: 0 msec
;; SERVER:
;; WHEN: Thu Oct 31 04:32:07 UTC 2019
;; MSG SIZE  rcvd: 157
In the output, notice the IP address received in the ANSWER SECTION, which is the same as the cluster IP that we reserved for the fruits-api-service. Also, note the IP address of the DNS server that was used for the lookup (the SERVER line), which is the cluster IP of the kube-dns service.
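If you want to confirm that the SERVER address really is the kube-dns service, you can read the cluster IP of the service directly; on CoreDNS-based clusters the service is still named kube-dns in the kube-system namespace.

$ kubectl get svc/kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'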

Configuring CoreDNS

CoreDNS is configured using a special file called the Corefile, which is a declaration of plugins that execute in sequence to resolve an FQDN. We will configure the Corefile used by kube-dns to route all resolution requests for the domain internal to istiocoredns. First, execute the following command to find the cluster IP address of the istiocoredns service.
$ kubectl get svc/istiocoredns -n istio-system -o jsonpath='{.spec.clusterIP}'

''
We will now edit the Corefile of kube-dns, which is stored in a ConfigMap named coredns in the kube-system namespace. Execute the following command to launch an editor for the ConfigMap.
$ kubectl edit configmap/coredns -n kube-system
Edit the Corefile to add (not replace) the following section in the configuration. After saving the changes, any request to resolve an address under the internal domain will be routed to istiocoredns.
internal:53 {
   errors
   cache 30
   forward . <istiocoredns cluster IP>   # the cluster IP of the istiocoredns service obtained earlier
}
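CoreDNS watches its ConfigMap and reloads the Corefile on its own when the reload plugin is enabled in its configuration, which is the default on most recent clusters; if the change doesn't seem to take effect, restarting the CoreDNS deployment (assuming the default deployment name) forces a reload.

$ kubectl rollout restart deployment/coredns -n kube-system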
Let’s create a service entry that associates the name my-fruits.internal with the cluster IP of our service.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: exotic-fruits-service-entry
  namespace: micro-shake-factory
spec:
  hosts:
    - my-fruits.internal
  location: MESH_INTERNAL
  addresses:
    -   # the cluster IP reserved for fruits-api-service
  endpoints:
    - address: fruits-api-service.micro-shake-factory.svc.cluster.local
  resolution: DNS
Apply the previous configuration to the cluster by executing the following command. 
$ kubectl apply -f

serviceentry.networking.istio.io/exotic-fruits-service-entry created
Finally, we need to ensure that istiocoredns knows that names under the internal domain can be resolved by the istio-coredns-plugin, so that it does not fail the resolution request by returning an NXDOMAIN response. The plugin will use the service entry that we created previously to resolve the IP address of the service. Just like kube-dns, istiocoredns stores its Corefile in a ConfigMap. Execute the following command to edit it.
$ kubectl edit configmap/coredns -n istio-system
There are two ways to edit the Corefile of istiocoredns, depending on the version of CoreDNS. If the version of CoreDNS is lower than 1.4.0 (this will be evident from the file structure; see source), update the configuration to resemble the following.
Corefile: |
  .:53 {
        errors
        health
        proxy internal 127.0.0.1:8053 {
          protocol grpc insecure
        }
        proxy global 127.0.0.1:8053 {
          protocol grpc insecure
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        reload
      }
If the version of CoreDNS is 1.4.0 or higher, change the configuration to resemble the following.
Corefile: |
  .:53 {
        errors
        health
        grpc internal 127.0.0.1:8053
        grpc global 127.0.0.1:8053
        forward . /etc/resolv.conf {
          except global internal
        }
        prometheus :9153
        cache 30
        reload
      }
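If you are unsure which CoreDNS version your istiocoredns deployment runs, inspecting the container image tags is usually enough; the deployment name and container layout assumed here match the default istiocoredns install.

$ kubectl get deployment/istiocoredns -n istio-system \
      -o jsonpath='{.spec.template.spec.containers[*].image}'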
Changes to the DNS configuration take a few minutes to propagate in the cluster. After a few minutes, execute another dig command, this time to resolve the FQDN my-fruits.internal.
root@dnsutils:/# dig my-fruits.internal

; <<>> DiG 9.9.5-3ubuntu0.2-Ubuntu <<>> my-fruits.internal
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41374
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:
;my-fruits.internal.            IN      A

;; ANSWER SECTION:
my-fruits.internal.     30      IN      A

;; Query time: 1 msec
;; SERVER:
;; WHEN: Thu Oct 31 05:53:11 UTC 2019
;; MSG SIZE  rcvd: 81
Note that resolving the FQDN again returned the cluster IP address of fruits-api-service. This means that any service inside the cluster can use the FQDN my-fruits.internal to communicate with the fruits-api-service. With the indirection in place, developers can easily change the name and location of the fruits-api-service without affecting its clients. After the initial setup, you can onboard other services to the mesh and address them with FQDNs under the internal domain simply by adding new service entry configurations.
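As a final sanity check, you can send a request to the API through its new name from a throwaway pod; the pod name and image here are assumptions, and this assumes the mesh still allows plain-text traffic to the sidecar (the default PERMISSIVE mTLS mode).

$ kubectl run curl-test -it --rm --restart=Never --image=curlimages/curl --command -- \
      curl -s http://my-fruits.internal/api/fruits/au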
This article is an extension of a discussion from my upcoming FREE title on Istio. I crammed a lot of Istio knowledge into a few pages so that you don't have to spend weeks learning it. Subscribe to my blog so that you don't miss the launch.