[BUG] Private cluster in spoke VNET with custom DNS in hub VNET tries to join the private DNS zone, linked to hub VNET, to the spoke one #4841
Comments
@xi4n - this is by-design behaviour; see the hub-and-spoke private AKS docs: https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=default-basic-networking%2Cazure-portal#hub-and-spoke-with-custom-dns The private DNS zone is linked only to the VNet that the cluster nodes are attached to. This means that the private endpoint can only be resolved by hosts in that linked VNet. In scenarios where no custom DNS is configured on the VNet (the default), this works without issue, as hosts point at 168.63.129.16 for DNS, which can resolve records in the private DNS zone because of the link.
Hey @asifkd012020, I can fully understand linking the Private DNS Zone to the spoke VNET being the default behaviour where no custom DNS server is configured. My problem is not with that, but with the fact that even when I bring my own Private DNS Zone together with another DNS solution, which should already work fine for the cluster, AKS will forcibly try to link it to the spoke VNET. Let me cite the same documentation:
In the scenario of a BYO Private DNS Zone, this paragraph is somewhat misleading for a first-time reader, who would think: "Aha, if I bring my own Private DNS Zone (as described in the last sentence), AKS will leave the VNET linking work (onto the hub VNET, as described in the first sentence) to me, and everything will work." Unfortunately that is not the case: as explained in the OP, AKS will still try to link the BYO Private DNS Zone to the spoke VNET and fail if it can't, which is (1) not necessary and (2) leaves us with two options
In short, in the scenario of a BYO Private DNS Zone, I would expect AKS to leave the VNET linking work to the user as well, or at least make it optional. After all, a client who brings their own Private DNS Zone is supposed to know what they are doing and is probably working with a hub-and-spoke network / multi-cluster setup.
@xi4n - yes, that's expected behaviour. We have multiple AKS clusters on multiple subnets in a spoke VNet. The first time, it does create a VNet link to the zone, even though we have a separate VNet link from the custom DNS servers' hosted zone that's connected to the vWAN hub. However, I agree MS has not clearly specified this in the documentation; it should say that a Virtual Network link from the spoke VNet to the AKS private DNS zone is created to ensure that the AKS cluster's nodes can reliably resolve the private FQDN of the control plane, regardless of external DNS configurations like those in your hub VNet.
Describe the bug
Let's say I have a classical hub-and-spoke network topology and I would like to create an AKS private cluster in a subnet of the spoke VNET s. I use a custom DNS server in the hub VNET h, and the private DNS zones, following Azure best practices, are linked only to the hub VNET h, including the privatelink.<my_region>.azmk8s.io one, which already exists prior to the creation of the cluster. The spoke VNET s is configured to use the custom DNS server in h to resolve DNS requests. Now, when I create the private cluster by providing it the private DNS zone ID of privatelink.<my_region>.azmk8s.io and granting it the Private DNS Zone Contributor role on the private DNS zone and the Contributor role on the node pool subnets (not on the whole VNET s), which all reside in the VNET s, it throws an error during the creation of the cluster with Terraform, which I don't believe is related to Terraform.
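For reference, a minimal sketch of those role assignments in Terraform; resource names, the zone's resource group, and the identity/subnet references are illustrative assumptions, not taken from my actual config:

```hcl
# The zone exists already (linked to hub VNET h only), so it is read as a data source.
data "azurerm_private_dns_zone" "aks" {
  name                = "privatelink.<my_region>.azmk8s.io"
  resource_group_name = "rg-dns" # assumed resource group
}

resource "azurerm_role_assignment" "dns_zone" {
  scope                = data.azurerm_private_dns_zone.aks.id
  role_definition_name = "Private DNS Zone Contributor"
  principal_id         = azurerm_user_assigned_identity.cluster.principal_id
}

resource "azurerm_role_assignment" "nodepool_subnet" {
  scope                = azurerm_subnet.nodepool.id # subnet-level scope only, not the whole VNET s
  role_definition_name = "Contributor"
  principal_id         = azurerm_user_assigned_identity.cluster.principal_id
}
```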
Expected behavior
The private cluster should accept the hub VNET as the single source and solution of DNS when it creates the private endpoint in the spoke VNET, because that's what Azure suggests we do.
Current behavior
The private cluster tries to link the private DNS zone I gave it to the spoke VNET, if it's not already linked. If it doesn't have enough permissions (because I only granted permissions at the subnet level, not at the VNET level in s), it throws an error.
A workaround
I could link the private DNS zone to both my hub and spoke VNETs, which would solve the problem during creation, but this is not how it should be, because the spoke VNET link will never be used; a minimal Terraform sketch of this workaround follows below.
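A sketch, assuming the data.azurerm_private_dns_zone.aks from above and an azurerm_virtual_network.spoke already in the configuration (the link name is hypothetical):

```hcl
# Workaround only: also link the existing zone to the spoke VNET s so the
# cluster's link check passes; resolution still goes via the hub DNS server.
resource "azurerm_private_dns_zone_virtual_network_link" "spoke" {
  name                  = "aks-spoke-link" # hypothetical name
  resource_group_name   = data.azurerm_private_dns_zone.aks.resource_group_name
  private_dns_zone_name = data.azurerm_private_dns_zone.aks.name
  virtual_network_id    = azurerm_virtual_network.spoke.id
}
```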
Did I use any preview features?
Some AKS preview features are enabled in the subscription of the spoke VNET. However, I was not using any preview features related to this bug; in particular, I was not using API server VNET integration, and AKS was supposed to create a private endpoint for me to access the control plane.
To Reproduce
You can just use the most recent Terraform azurerm_kubernetes_cluster resource, with 2 user-assigned managed identities, one for the cluster and one for kubelet; the network is CNI overlay + Cilium. You need to provide the private DNS zone ID and the role assignments described above; a hedged sketch follows.
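A minimal sketch of such a cluster, assuming the identities, subnet, and resource group referenced here exist elsewhere in the configuration; names and sizes are illustrative:

```hcl
resource "azurerm_kubernetes_cluster" "private" {
  name                       = "aks-spoke" # hypothetical
  location                   = azurerm_resource_group.spoke.location
  resource_group_name        = azurerm_resource_group.spoke.name
  dns_prefix_private_cluster = "aks-spoke"

  private_cluster_enabled = true
  private_dns_zone_id     = data.azurerm_private_dns_zone.aks.id # BYO privatelink.<my_region>.azmk8s.io

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.cluster.id] # cluster identity
  }

  kubelet_identity {
    client_id                 = azurerm_user_assigned_identity.kubelet.client_id
    object_id                 = azurerm_user_assigned_identity.kubelet.object_id
    user_assigned_identity_id = azurerm_user_assigned_identity.kubelet.id
  }

  default_node_pool {
    name           = "system"
    node_count     = 1
    vm_size        = "Standard_D4s_v5" # assumed size
    vnet_subnet_id = azurerm_subnet.nodepool.id # subnet in spoke VNET s
  }

  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay" # CNI overlay
    network_data_plane  = "cilium"  # azurerm 4.x attribute; older providers use ebpf_data_plane
  }
}
```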
Environment (please complete the following information):