author     Kevin Krakauer <krakauer@google.com>    2019-04-03 12:59:27 -0700
committer  Shentubot <shentubot@google.com>        2019-04-03 13:00:34 -0700
commit     82529becaee6f5050cb3ebb4aaa7a798357c1cf1 (patch)
tree       f82a017a8b48ed6ee6dfbd78db74a55b3fbd56c5 /test/syscalls
parent     c79e81bd27cd9cccddb0cece30bf47efbfca41b7 (diff)
Fix index out of bounds in tty implementation.
The previous implementation revolved around runes instead of bytes, which caused
weird behavior when converting between the two. For example, peekRune would read
the byte 0xff from a buffer, convert it to a rune, then return it. As rune is an
alias of int32, 0xff was 0-padded to int32(255), the code point for 'ÿ' (U+00FF).
However, peekRune also returned the length of the byte (1). When calling
utf8.EncodeRune, we only allocated 1 byte, but tried to write the 2-byte
UTF-8 encoding of 'ÿ'.
tl;dr: I apparently didn't understand runes when I wrote this.
PiperOrigin-RevId: 241789081
Change-Id: I14c788af4d9754973137801500ef6af7ab8a8727
Diffstat (limited to 'test/syscalls')
 test/syscalls/linux/pty.cc | 6 ++++++
 1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/test/syscalls/linux/pty.cc b/test/syscalls/linux/pty.cc
index 253aa26ba..5b2dc9ccb 100644
--- a/test/syscalls/linux/pty.cc
+++ b/test/syscalls/linux/pty.cc
@@ -568,6 +568,12 @@ TEST_F(PtyTest, WriteSlaveToMaster) {
   EXPECT_EQ(memcmp(buf, kExpected, sizeof(kExpected)), 0);
 }
 
+TEST_F(PtyTest, WriteInvalidUTF8) {
+  char c = 0xff;
+  ASSERT_THAT(syscall(__NR_write, master_.get(), &c, sizeof(c)),
+              SyscallSucceedsWithValue(sizeof(c)));
+}
+
 // Both the master and slave report the standard default termios settings.
 //
 // Note that TCGETS on the master actually redirects to the slave (see comment