Istio with Sail-Operator
Hi there! Thank you for stopping by.
I'd like to share a tool that I've been using to manage Istio and find really useful.
Background
Istio is a popular Kubernetes Service Mesh and Ingress Gateway. It was announced in 2017, donated to the CNCF by Google in 2022, and graduated as a CNCF project in July 2023.
Istio utilises EnvoyProxy to mesh services and, separately, to proxy traffic into the cluster. When using the ambient dataplane mode, however, Istio uses ztunnel to proxy traffic between services.
In Kubernetes land, operators are a great way to manage resources at a higher level.
In the past, there was an official Istio operator for installing and upgrading Istio via an IstioOperator resource. However, it has been deprecated, and other operators have come up to provide the same functionality, using the resource definition to template the Helm charts together.
Recently, some folks in the Istio community who also contribute to OpenShift have brought us Sail-Operator.
Sail-Operator brings with it a bunch of useful resources to manage Istio, like Istio, IstioRevision, IstioCNI and ZTunnel, which we will explore here.
Why an operator?
General management
Istio is a complex piece of machinery, and having a single piece of software concerned with installing Istio and managing its lifecycle is really helpful.
Features
With Sail-Operator, multiple copies of Istio can be installed as well as new versions rolled out while maintaining availability.
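For example, upgrades can be driven through the Istio resource's update strategy. A minimal sketch, based on the RevisionBased strategy described in the Sail-Operator docs, which rolls a new control plane revision out alongside the old one before retiring it:

---
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.24.2
  namespace: istio-system
  updateStrategy:
    type: RevisionBased
    inactiveRevisionDeletionGracePeriodSeconds: 30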
Personal experience
In the past, when I've installed Istio, I've either used the IstioOperator manifest with istioctl or used Helm. Both are less desirable to me. Using istioctl means either rendering the manifests out and committing the result (which leads to drift) or running it against a live cluster; using the Helm CLI means managing a Helm install by hand.
Deploying Sail-Operator
Install with Helm via the CLI:
helm repo add sail-operator https://istio-ecosystem.github.io/sail-operator
helm upgrade --install sail-operator sail-operator/sail-operator -n sail-operator --create-namespace --version 1.0.0
or, better yet through FluxCD:
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: sail-operator
  namespace: sail-operator
spec:
  interval: 1m
  url: https://istio-ecosystem.github.io/sail-operator
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: sail-operator
  namespace: sail-operator
spec:
  interval: 1m
  releaseName: sail-operator
  chart:
    spec:
      chart: sail-operator
      sourceRef:
        kind: HelmRepository
        name: sail-operator
      version: 1.0.0
  values: {}
Once deployed, it should be seen in its namespace, like so:
kubectl -n sail-operator get deployments,pods
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sail-operator   1/1     1            1           8h

NAME                                 READY   STATUS    RESTARTS   AGE
pod/sail-operator-7c756644b6-v2482   1/1     Running   0          8h
Pretty straightforward.
Installing Istio
With Sail-Operator installed, Istio can now be installed.
Please note: the example configurations in this guide are not intended to be production-ready. Please play around in kind or minikube!
Here, I'm installing Istio in ambient mode to enable the sidecar-less proxy:
---
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.24.2
  namespace: istio-system
  profile: ambient
  values:
    pilot:
      trustedZtunnelNamespace: "ztunnel"
The configuration for spec.values matches up with the Helm chart values for istiod and base. Explore the schema with kubectl explain:
kubectl explain --recursive istio.spec.values
GROUP:      sailoperator.io
KIND:       Istio
VERSION:    v1

FIELD: values <Object>

DESCRIPTION:
    Defines the values to be passed to the Helm charts when installing Istio.

FIELDS:
  base <Object>
    excludedCRDs <[]string>
    validationCABundle <string>
    validationURL <string>
  compatibilityVersion <string>
  defaultRevision <string>
  experimental <Object>
  global <Object>
    arch <Object>
  ...
There are several ways you may wish to harden a deployment for production, such as (but not limited to):
- further locking down the SecurityContext;
- enabling HorizontalPodAutoscalers (see the sketch after this list); and
- building and using your own signed and verified container images from source.
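For instance, istiod autoscaling can be switched on through spec.values. A sketch; the pilot.autoscale* field names come from the upstream istiod Helm chart, and the min/max numbers here are just illustrative:

---
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  version: v1.24.2
  namespace: istio-system
  profile: ambient
  values:
    pilot:
      autoscaleEnabled: true
      autoscaleMin: 2
      autoscaleMax: 5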
Now it is installed and running:
kubectl -n istio-system get deployments,pods -l app=istiod
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istiod   1/1     1            1           8h

NAME                          READY   STATUS    RESTARTS   AGE
pod/istiod-5f65f886f6-dhtrw   1/1     Running   0          8h
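The operator also reports the health of the control plane on the Istio resource itself, which can be checked with:

kubectl get istio default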
Other components can be installed too, like IstioCNI, which is required for ZTunnel to function:
---
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
spec:
  version: v1.24.2
  profile: ambient
  namespace: istio-cni
and of course ZTunnel itself:
---
apiVersion: sailoperator.io/v1alpha1
kind: ZTunnel
metadata:
  name: default
spec:
  version: v1.24.2
  namespace: ztunnel
  profile: ambient
Please note: both IstioCNI and ZTunnel require their namespaces to carry the privileged PodSecurity label, due to their need for particular syscalls and network access.
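For example, the ztunnel namespace could be declared like this, a minimal sketch using the standard Pod Security Standards label:

---
apiVersion: v1
kind: Namespace
metadata:
  name: ztunnel
  labels:
    pod-security.kubernetes.io/enforce: privileged

Both components run as DaemonSets, so a quick kubectl -n istio-cni get daemonsets and kubectl -n ztunnel get daemonsets confirms they've landed on each node.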
Traffic using Gateway-API and Istio
Using Gateway-API's Gateway resource, a public access point can be provisioned like so, listening on HTTP (port 80):
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: default
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
This deploys EnvoyProxy and sets up a LoadBalancer Service to serve traffic.
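To route traffic through the Gateway, an HTTPRoute can be attached to it. A minimal sketch, assuming the Istio httpbin sample (Service port 8000) is deployed in the default namespace, with a hypothetical hostname:

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin
  namespace: default
spec:
  parentRefs:
  - name: gateway
    namespace: istio-ingress
  hostnames:
  - httpbin.example.com
  rules:
  - backendRefs:
    - name: httpbin
      port: 8000

The cross-namespace attachment works here because the Gateway's allowedRoutes permits routes from all namespaces.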
For a traditional Ingress and Istio Gateway setup, see this guide.
Taking a look around the cluster
Now that it's all installed and a Gateway is deployed, we can verify that everything is working correctly.
See what's running:
kubectl get pods -A
NAMESPACE            NAME                                          READY   STATUS    RESTARTS   AGE
default              httpbin-7f56dc944b-xwntm                      1/1     Running   0          8h
flux-system          helm-controller-5fc6f89467-dd2q9              1/1     Running   0          8h
flux-system          kustomize-controller-785d866cb7-8jbxj         1/1     Running   0          8h
flux-system          notification-controller-56776fcb98-wbrwj      1/1     Running   0          8h
flux-system          source-controller-6cd558bc58-jr95z            1/1     Running   0          8h
istio-cni            istio-cni-node-69lc7                          1/1     Running   0          8h
istio-ingress        gateway-istio-8565d64577-sh9qf                1/1     Running   0          8h
istio-system         istiod-5f65f886f6-dhtrw                       1/1     Running   0          8h
istio-system         prometheus-bbc7d764f-6g2ld                    2/2     Running   0          8h
kiali-operator       kiali-operator-6495cf458-229xf                1/1     Running   0          8h
kiali                kiali-8497d48d48-l7kqf                        1/1     Running   0          8h
kube-system          coredns-7db6d8ff4d-ffp4g                      1/1     Running   0          8h
kube-system          coredns-7db6d8ff4d-szds9                      1/1     Running   0          8h
kube-system          etcd-kind-control-plane                       1/1     Running   0          8h
kube-system          kindnet-8jbs7                                 1/1     Running   0          8h
kube-system          kube-apiserver-kind-control-plane             1/1     Running   0          8h
kube-system          kube-controller-manager-kind-control-plane    1/1     Running   0          8h
kube-system          kube-proxy-t2hvm                              1/1     Running   0          8h
kube-system          kube-scheduler-kind-control-plane             1/1     Running   0          8h
local-path-storage   local-path-provisioner-988d74bc-p42wr         1/1     Running   0          8h
sail-operator        sail-operator-7c756644b6-v2482                1/1     Running   0          8h
ztunnel              ztunnel-r9f8w                                 1/1     Running   0          8h
See the namespaces which have ambient enabled:
kubectl get ns -o custom-columns='NAME:.metadata.name,DATAPLANE MODE:.metadata.labels.istio\.io/dataplane-mode'
NAME                 DATAPLANE MODE
default              ambient
flux-system          <none>
istio-cni            <none>
istio-ingress        <none>
istio-system         <none>
kiali                <none>
kiali-operator       <none>
kube-node-lease      <none>
kube-public          <none>
kube-system          <none>
local-path-storage   <none>
sail-operator        <none>
ztunnel              <none>
I have only enabled it on the default namespace for this example, as that's where I've got the sample httpbin deployed.
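Enabling ambient mode for a namespace is just a matter of a label:

kubectl label namespace default istio.io/dataplane-mode=ambient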
Inspecting with Kiali, the connectivity can be seen here:

All looks good.
Closing
I'm really enjoying using Sail-Operator to manage Istio and would recommend it to others.
Some takeaways:
- Sail-Operator makes it really simple to get up and running with managing Istio
- Istio is a great solution to its problem space
- Ambient dataplane mode and Gateway-API both work well with Istio
Thank you for checking this post out. I hope that you found it interesting and informative. If you did, please check out my example repo on GitLab.