/*
 * Copyright (C) 2004 Jeff Dike (jdike@addtoit.com)
 * Licensed under the GPL
 */

#ifndef __SYSDEP_STUB_H
#define __SYSDEP_STUB_H

#include <stddef.h>
#include <asm/ptrace.h>
#include <generated/asm-offsets.h>

#define STUB_MMAP_NR __NR_mmap2
/* mmap2 takes its file offset in page-sized units rather than in bytes */
#define MMAP_OFFSET(o) ((o) >> UM_KERN_PAGE_SHIFT)
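
/*
 * System calls are issued through the legacy "int $0x80" gate. On 32-bit
 * x86 the syscall number goes in %eax and up to six arguments are passed
 * in %ebx, %ecx, %edx, %esi, %edi and %ebp; the result comes back in
 * %eax. The helpers below encode that convention, with the "0" matching
 * constraint tying %eax to both the number and the return value.
 */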
static __always_inline long stub_syscall0(long syscall)
{
	long ret;

	__asm__ volatile ("int $0x80" : "=a" (ret) : "0" (syscall)
			  : "memory");

	return ret;
}

static __always_inline long stub_syscall1(long syscall, long arg1)
{
	long ret;

	__asm__ volatile ("int $0x80" : "=a" (ret) : "0" (syscall), "b" (arg1)
			  : "memory");

	return ret;
}

static __always_inline long stub_syscall2(long syscall, long arg1, long arg2)
{
	long ret;

	__asm__ volatile ("int $0x80" : "=a" (ret) : "0" (syscall), "b" (arg1),
			  "c" (arg2)
			  : "memory");

	return ret;
}

static __always_inline long stub_syscall3(long syscall, long arg1, long arg2,
					  long arg3)
{
	long ret;

	__asm__ volatile ("int $0x80" : "=a" (ret) : "0" (syscall), "b" (arg1),
			  "c" (arg2), "d" (arg3)
			  : "memory");

	return ret;
}

static __always_inline long stub_syscall4(long syscall, long arg1, long arg2,
					  long arg3, long arg4)
{
	long ret;

	__asm__ volatile ("int $0x80" : "=a" (ret) : "0" (syscall), "b" (arg1),
			  "c" (arg2), "d" (arg3), "S" (arg4)
			  : "memory");

	return ret;
}

static __always_inline long stub_syscall5(long syscall, long arg1, long arg2,
					  long arg3, long arg4, long arg5)
{
	long ret;

	__asm__ volatile ("int $0x80" : "=a" (ret) : "0" (syscall), "b" (arg1),
			  "c" (arg2), "d" (arg3), "S" (arg4), "D" (arg5)
			  : "memory");

	return ret;
}
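
/*
 * The sixth argument lives in %ebp, which the compiler may be using as
 * the frame pointer, so it cannot appear in the constraint list. Instead,
 * arg1 and arg6 are passed through a small structure reachable via %ebx:
 * the asm saves %ebp, loads it from the structure, loads the real arg1
 * into %ebx, performs the syscall, and restores %ebp.
 */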
static __always_inline long stub_syscall6(long syscall, long arg1, long arg2,
					  long arg3, long arg4, long arg5,
					  long arg6)
{
	struct syscall_args {
		int ebx, ebp;
	} args = { arg1, arg6 };
	long ret;

	__asm__ volatile ("pushl %%ebp;"
			  "movl 0x4(%%ebx),%%ebp;"
			  "movl (%%ebx),%%ebx;"
			  "int $0x80;"
			  "popl %%ebp"
			  : "=a" (ret)
			  : "0" (syscall), "b" (&args),
			    "c" (arg2), "d" (arg3), "S" (arg4), "D" (arg5)
			  : "memory");

	return ret;
}
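
/*
 * Illustrative sketch only, not part of the original header: addr, fd
 * and offset below are placeholder values. A stub mapping one page of a
 * shared file might issue
 *
 *	stub_syscall6(STUB_MMAP_NR, addr, UM_KERN_PAGE_SIZE,
 *		      PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED,
 *		      fd, MMAP_OFFSET(offset));
 *
 * where MMAP_OFFSET() converts the byte offset into the page-sized
 * units that mmap2 expects.
 */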
/* int3 raises SIGTRAP, handing control back to whoever supervises the stub */
static __always_inline void trap_myself(void)
{
	__asm("int3");
}
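
/*
 * There is no direct way to read %eip, so the classic call/pop sequence
 * is used to discover it: the call pushes the address of the next
 * instruction, which pop then retrieves. That address is rounded down
 * to the start of the stub code page and advanced by one page, which is
 * where the stub data area begins.
 */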
static __always_inline void *get_stub_data(void)
{
	unsigned long ret;

	asm volatile (
		"call _here_%=;"
		"_here_%=:"
		"popl %0;"
		"andl %1, %0 ;"
		"addl %2, %0 ;"
		: "=a" (ret)
		: "g" (~(UM_KERN_PAGE_SIZE - 1)),
		  "g" (UM_KERN_PAGE_SIZE));

	return (void *)ret;
}
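
/*
 * Mechanically, stub_start() moves %esp down by the size of the stub
 * code page plus the stub data pages, apparently to keep the stack
 * clear of them, and then calls fn() indirectly through %eax. fn's
 * address is a compile-time constant, hence the "i" constraints.
 */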
#define stub_start(fn)							\
	asm volatile (							\
		"subl %0,%%esp ;"					\
		"movl %1, %%eax ; "					\
		"call *%%eax ;"						\
		:: "i" ((1 + STUB_DATA_PAGES) * UM_KERN_PAGE_SIZE),	\
		   "i" (&fn))
static __always_inline void
stub_seccomp_restore_state(struct stub_data_arch *arch)
{
	for (int i = 0; i < sizeof(arch->tls) / sizeof(arch->tls[0]); i++) {
		if (arch->sync & (1 << i))
			stub_syscall1(__NR_set_thread_area,
				      (unsigned long) &arch->tls[i]);
	}

	arch->sync = 0;
}

#endif