Kube Resource Orchestrator (kro)

Kube Resource Orchestrator (kro) is an open-source, Kubernetes-native framework designed to streamline the creation and management of complex Kubernetes resource configurations. By allowing operators to group related resources into reusable units, kro simplifies deployments and enhances manageability across diverse environments.

What is kro?

kro enables platform and DevOps teams to define custom Kubernetes APIs using straightforward configurations. These APIs are defined through ResourceGraphDefinitions (RGDs), which encapsulate a set of Kubernetes resources and the logical operations between them. This abstraction lets developers deploy and manage applications as single units without delving into the intricacies of individual resource configurations.

Key Features

  • ResourceGraphDefinition (RGD): Acts as a blueprint for creating new Kubernetes APIs that deploy multiple resources together. RGDs define the schema, resources, dependencies, conditions, and status for the grouped resources. 
  • Common Expression Language (CEL): kro leverages CEL for logical operations, enabling value passing between resources and incorporating conditionals into custom API definitions. This facilitates dynamic configurations based on resource states (see the sketch after this list).
  • Dependency Management: By treating resources as a Directed Acyclic Graph (DAG), kro determines the correct order for resource creation, ensuring that dependencies are respected and resources are provisioned in the appropriate sequence.
  • Custom Resource Definitions (CRDs): Upon applying an RGD, kro generates and registers a new CRD in the cluster, providing a simplified API for developers to interact with complex resource groupings.
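
To make the CEL and dependency points above concrete, the fragment below sketches how such expressions might appear in an RGD's status block. It is a minimal, illustrative sketch only: the resource id deployment and the field names mirror the Application example later on this page, and exact expression support may vary between kro versions.

    status:
      # Pass a value straight through from a managed resource into the instance status.
      availableReplicas: ${deployment.status.availableReplicas}
      # Or compute a derived value with a CEL comparison. Referencing the deployment
      # here is also what adds an edge to the dependency graph (DAG), so kro knows
      # the Deployment must be reconciled before this value can be evaluated.
      ready: ${deployment.status.availableReplicas == deployment.spec.replicas}

Likewise, because the Service in the full example below references ${deployment.spec.selector.matchLabels}, kro infers that the Deployment must be created before the Service.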

Benefits

  • Simplified Developer Experience: Developers can deploy applications using custom APIs without needing deep Kubernetes expertise, as the underlying complexity is abstracted away.
  • Standardization: Platform teams can enforce organizational standards and best practices by defining RGDs, ensuring consistent deployments across environments.
  • Collaboration: Security, compliance, and platform teams can collaborate to create RGDs that encapsulate necessary policies and configurations, promoting a unified approach to resource management.
  • Cloud-Agnostic: kro supports integration with various cloud providers, allowing for the orchestration of both Kubernetes-native and external cloud resources within the same framework.

Getting Started

  1. Install kro: Set up kro in your Kubernetes cluster.

    Fetch the latest release version from GitHub:

    export KRO_VERSION=$(curl -sL \
        https://api.github.com/repos/kro-run/kro/releases/latest | \
        jq -r '.tag_name | ltrimstr("v")'
      )
    

    Validate that KRO_VERSION is populated with a version:

    echo $KRO_VERSION
    

    Install kro using Helm:

    helm install kro oci://ghcr.io/kro-run/kro/kro \
      --namespace kro \
      --create-namespace \
      --version=${KRO_VERSION}
    
  2. Define an RGD: Create a ResourceGraphDefinition that specifies the desired resources, their configurations, and dependencies.

    resourcegraphdefinition.yaml

    apiVersion: kro.run/v1alpha1
    kind: ResourceGraphDefinition
    metadata:
      name: my-application
    spec:
      # kro uses this simple schema to create your CRD schema and apply it
      # The schema defines what users can provide when they instantiate the RGD (create an instance).
      schema:
        apiVersion: v1alpha1
        kind: Application
        spec:
          # Spec fields that users can provide.
          name: string
          image: string | default="nginx"
          ingress:
            enabled: boolean | default=false
        status:
          # Fields the controller will inject into each instance's status.
          deploymentConditions: ${deployment.status.conditions}
          availableReplicas: ${deployment.status.availableReplicas}
    
      # Define the resources this API will manage.
      resources:
        - id: deployment
          template:
            apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: ${schema.spec.name} # Use the name provided by user
            spec:
              replicas: 3
              selector:
                matchLabels:
                  app: ${schema.spec.name}
              template:
                metadata:
                  labels:
                    app: ${schema.spec.name}
                spec:
                  containers:
                    - name: ${schema.spec.name}
                      image: ${schema.spec.image} # Use the image provided by user
                      ports:
                        - containerPort: 80
    
        - id: service
          template:
            apiVersion: v1
            kind: Service
            metadata:
              name: ${schema.spec.name}-service
            spec:
              selector: ${deployment.spec.selector.matchLabels} # Use the deployment selector
              ports:
                - protocol: TCP
                  port: 80
                  targetPort: 80
    
        - id: ingress
          includeWhen:
            - ${schema.spec.ingress.enabled} # Only include if the user wants to create an Ingress
          template:
            apiVersion: networking.k8s.io/v1
            kind: Ingress
            metadata:
              name: ${schema.spec.name}-ingress
              annotations:
                kubernetes.io/ingress.class: alb
                alb.ingress.kubernetes.io/scheme: internet-facing
                alb.ingress.kubernetes.io/target-type: ip
                alb.ingress.kubernetes.io/healthcheck-path: /health
                alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
                alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
            spec:
              rules:
                - http:
                    paths:
                      - path: "/"
                        pathType: Prefix
                        backend:
                          service:
                            name: ${service.metadata.name} # Use the service name
                            port:
                              number: 80
    
  3. Apply the RGD: Deploy the RGD to your cluster, which will generate the corresponding CRD.

    kubectl apply -f resourcegraphdefinition.yaml
    

    Check the status of the ResourceGraphDefinition using the kubectl command:

    kubectl get rgd my-application -o wide
    

    You should see the ResourceGraphDefinition in the Active state, along with relevant information to help you understand your application:

    NAME             APIVERSION   KIND          STATE    TOPOLOGICALORDER                     AGE
    my-application   v1alpha1     Application   Active   ["deployment","service","ingress"]
    
  4. Instantiate Resources: Use the new custom API to create instances of your defined resource groupings.

    Create a new file named instance.yaml with the following content:

    apiVersion: kro.run/v1alpha1
    kind: Application
    metadata:
      name: my-application-instance
    spec:
      name: my-awesome-app
      ingress:
        enabled: true
    

    Use the kubectl command to deploy the Application instance to your Kubernetes cluster:

    kubectl apply -f instance.yaml
    

    Check the status of the Application instance:

    kubectl get applications
    

    After a few seconds, you should see the Application instance in the Active state:

    NAME                      STATE    SYNCED   AGE
    my-application-instance   ACTIVE   True     10s
    

    Check the resources created by the Application instance:

    kubectl get deployments,services,ingresses
    

    You should see the Deployment, Service, and Ingress created by the Application instance (a sketch for reading back the instance's injected status fields follows below).

    NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/my-awesome-app   3/3     3            3           69s
    
    NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    service/my-awesome-app-service   ClusterIP   10.100.167.72   <none>        80/TCP    65s
    
    NAME                                               CLASS    HOSTS   ADDRESS   PORTS   AGE
    ingress.networking.k8s.io/my-awesome-app-ingress   <none>   *                 80      62s
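
    The RGD's schema (step 2) also defines status fields, deploymentConditions and availableReplicas, which kro injects into the instance once the underlying resources reconcile. As a quick, illustrative check (assuming the instance name used above):

    kubectl get application my-application-instance \
      -o jsonpath='{.status.availableReplicas}'

    When you are done experimenting, delete the instance first so kro can clean up the resources it created, then delete the RGD, which should also remove the generated Application API:

    kubectl delete -f instance.yaml
    kubectl delete -f resourcegraphdefinition.yaml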
    

For detailed instructions and examples, refer to the official documentation at kro.run.

Community and Contributions

kro is a collaborative project supported by major cloud providers, including Google Cloud, AWS, and Azure. The project welcomes contributions from the community: you can participate by submitting issues, contributing code, or joining discussions on the GitHub repository and in the #kro channel on the Kubernetes Slack.

By providing a Kubernetes-native, cloud-agnostic approach to resource orchestration, kro empowers teams to manage complex deployments more efficiently, fostering innovation and consistency across development and operations.