Getting Started

As part of this getting started guide, you will:

  • Grant your email address access to the cluster.
  • Install kty on your cluster.
  • Test it out!

You can run kty outside of a cluster, but it is a little more complex to set up because of permissions. Check out the off-cluster instructions for more details.

Prerequisites

  • Get a k8s cluster. k3d is a convenient way to get a cluster up and running fast. Follow their installation instructions and create a default cluster.
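
If you go the k3d route, creating a default cluster once k3d is installed should be a single command:

k3d cluster create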

Install the CLI

Binaries are available on the releases page. If you have Homebrew, you can install with:

brew install grampelberg/kty/kty

ssh is all you need to use kty. The CLI is optional, but makes installation and management a little easier.

Set Up Permissions

Just like kubectl, kty delegates authorization to Kubernetes RBAC. The email address you use to log in needs to be granted access to the cluster. If your organization already uses email addresses for access, you can skip this step - just make sure to verify that you have access.

To grant access to your email, you can use the kty CLI. It will apply a ClusterRoleBinding that associates your email address with the provided role. In this example, we’re using cluster-admin because it is available on every cluster and is an easy way to get started. You can change this to any other role, as long as it has the minimum permissions required. Run:

kty users grant cluster-admin me@example.com

If you’d like to verify the YAML or apply it yourself, pass -o yaml to the command.
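
If you’re curious what the grant amounts to, it is a standard ClusterRoleBinding that binds your email as a User subject to the chosen ClusterRole. The exact manifest kty generates may differ (use -o yaml to see it), but a minimal sketch looks roughly like this, with a hypothetical binding name:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kty-me-example-com # hypothetical, check -o yaml for the real name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: me@example.com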

You can verify that this worked and you have the minimum permissions required by running:

kty users check me@example.com

This command runs a SelfSubjectAccessReview against your cluster. If you want to use kubectl instead, you can run:

kubectl auth can-i list pods --as me@example.com
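
The authoritative check for kty’s minimum permissions is kty users check above; if you’d also like to spot-check specific verbs with kubectl, illustrative (not exhaustive) examples include:

kubectl auth can-i create pods/exec --as me@example.com
kubectl auth can-i get pods/log --as me@example.com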

Install on your cluster

While you aren’t required to run kty on your cluster, doing so takes care of the dependencies for you. To install the server and associated configuration, such as the ClusterRole and Service resources, run:

kubectl create ns kty && \
  kty resources install -n kty

You can pass --dry-run to see what’s happening or pass the output to kubectl apply -f - yourself. If you’d like to install using a different method, such as helm, see the installation instructions.
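
For example, assuming --dry-run writes the rendered manifests to stdout, you could save and review them before applying yourself:

kty resources install -n kty --dry-run > kty.yaml
kubectl apply -f kty.yaml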

To clean these resources up, you can run kty resources delete -n kty.

Verify that this is up and running successfully by checking that the pod has started up:

kubectl -n kty rollout status deploy server

Once the pod has started, you’ll want to make sure there’s an IP address that can be reached. The following command also adds cluster.kty.dev to your /etc/hosts file so that it is easy to get to the server in the future.

kubectl -n kty get service server --output=jsonpath='{.status.loadBalancer.ingress[0].ip}' \
  | awk '{print $1 " cluster.kty.dev"}' \
  | sudo tee -a /etc/hosts

⚠️ If the load balancer isn’t getting a public IP address, there are other ways to connect to the server. You can port-forward using kubectl:

kubectl -n kty port-forward service/server 2222:2222

This is not something you’d want to do in production, but it’s a quick way to see what you can do with kty.
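
Note that with the port-forward running, the server is reachable at localhost:2222 rather than the load balancer address, so either point cluster.kty.dev at 127.0.0.1 in /etc/hosts or ssh straight to localhost:

ssh -p 2222 me@localhost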

Test it out

ssh -p 2222 me@cluster.kty.dev

Next Steps

  • Exec into a pod. You can use vi keybindings to navigate and press Enter to select the pod you want. Go to the Shell tab and you’ll be able to pick the command to exec and then be dropped into a shell in the pod directly.

  • scp some files out of a container. The remote path is made up of the namespace, the pod name, and the path inside the container; this example copies /etc/hosts from the my-pod pod in the default namespace:

    scp -P 2222 me@cluster.kty.dev:/default/my-pod/etc/hosts /tmp
  • Check out usage for more information on everything that you can do.

  • You can bring your own provider if the default provider doesn’t work for you. This would be a great way to configure groups or use other types of logins if your organization uses something other than Google or GitHub.