/*
* Copyright (c) 1997-8,2007,11,19-21 Andrew G Morgan <morgan@kernel.org>
*
* This file deals with getting and setting capabilities on processes.
*/
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <errno.h>
#include <fcntl.h> /* Obtain O_* constant definitions */
#include <grp.h>
#include <sys/prctl.h>
#include <sys/securebits.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include "libcap.h"
/*
 * libcap uses this abstraction for all system calls that change
 * kernel-managed capability state. This permits the user to redirect
 * it for testing and also to better implement POSIX semantics when
 * using pthreads.
 */
static long int _cap_syscall3(long int syscall_nr,
long int arg1, long int arg2, long int arg3)
{
return syscall(syscall_nr, arg1, arg2, arg3);
}
static long int _cap_syscall6(long int syscall_nr,
long int arg1, long int arg2, long int arg3,
long int arg4, long int arg5, long int arg6)
{
return syscall(syscall_nr, arg1, arg2, arg3, arg4, arg5, arg6);
}
/*
 * To keep the structure of the code conceptually similar in the C and
 * Go implementations, we introduce this abstraction for invoking
 * state-writing system calls. In psx+pthreaded code, the fork
 * implementation provided by NPTL ensures that we can consistently
 * use the multithreaded syscalls even in the child after a fork().
 */
struct syscaller_s {
long int (*three)(long int syscall_nr,
long int arg1, long int arg2, long int arg3);
long int (*six)(long int syscall_nr,
long int arg1, long int arg2, long int arg3,
long int arg4, long int arg5, long int arg6);
};
/* use this syscaller for multi-threaded code */
static struct syscaller_s multithread = {
.three = _cap_syscall3,
.six = _cap_syscall6
};
/* use this syscaller for single-threaded code */
static struct syscaller_s singlethread = {
.three = _cap_syscall3,
.six = _cap_syscall6
};
/*
* This gets reset to 0 if we are *not* linked with libpsx.
*/
__attribute__((visibility ("hidden"))) int _libcap_overrode_syscalls = 1;
/*
 * cap_set_syscall overrides the state-setting system calls that
 * libcap performs. Generally, you don't need to call this manually:
 * libcap tries hard to set things up appropriately.
 */
void cap_set_syscall(long int (*new_syscall)(long int,
long int, long int, long int),
long int (*new_syscall6)(long int, long int,
long int, long int,
long int, long int,
long int)) {
if (new_syscall == NULL) {
psx_load_syscalls(&multithread.three, &multithread.six);
} else {
multithread.three = new_syscall;
multithread.six = new_syscall6;
}
}
static int _libcap_capset(struct syscaller_s *sc,
cap_user_header_t header, const cap_user_data_t data)
{
if (_libcap_overrode_syscalls) {
return sc->three(SYS_capset, (long int) header, (long int) data, 0);
}
return capset(header, data);
}
static int _libcap_wprctl3(struct syscaller_s *sc,
long int pr_cmd, long int arg1, long int arg2)
{
if (_libcap_overrode_syscalls) {
int result;
result = sc->three(SYS_prctl, pr_cmd, arg1, arg2);
if (result >= 0) {
return result;
}
errno = -result;
return -1;
}
return prctl(pr_cmd, arg1, arg2, 0, 0, 0);
}
static int _libcap_wprctl6(struct syscaller_s *sc,
long int pr_cmd, long int arg1, long int arg2,
long int arg3, long int arg4, long int arg5)
{
if (_libcap_overrode_syscalls) {
int result;
result = sc->six(SYS_prctl, pr_cmd, arg1, arg2, arg3, arg4, arg5);
if (result >= 0) {
return result;
}
errno = -result;
return -1;
}
return prctl(pr_cmd, arg1, arg2, arg3, arg4, arg5);
}
/*
* cap_get_proc obtains the capability set for the current process.
*/
cap_t cap_get_proc(void)
{
cap_t result;
/* allocate a new capability set */
result = cap_init();
if (result) {
_cap_debug("getting current process' capabilities");
/* fill the capability sets via a system call */
if (capget(&result->head, &result->u[0].set)) {
cap_free(result);
result = NULL;
}
}
return result;
}
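/*
 * A minimal usage sketch, kept in a comment so it is not part of the
 * library build: fetch the current process capabilities with
 * cap_get_proc(), render them with cap_to_text(), and release both
 * allocations with cap_free(). The function name show_my_caps() is
 * purely illustrative.
 *
 *     #include <stdio.h>
 *     #include <sys/capability.h>
 *
 *     void show_my_caps(void) {
 *         cap_t caps = cap_get_proc();
 *         if (caps == NULL) {
 *             perror("cap_get_proc");
 *             return;
 *         }
 *         char *text = cap_to_text(caps, NULL);
 *         if (text != NULL) {
 *             printf("current caps: %s\n", text);
 *             cap_free(text);
 *         }
 *         cap_free(caps);
 *     }
 */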
static int _cap_set_proc(struct syscaller_s *sc, cap_t cap_d) {
int retval;
if (!good_cap_t(cap_d)) {
errno = EINVAL;
return -1;
}
_cap_debug("setting process capabilities");
_cap_mu_lock(&cap_d->mutex);
retval = _libcap_capset(sc, &cap_d->head, &cap_d->u[0].set);
_cap_mu_unlock(&cap_d->mutex);
return retval;
}
int cap_set_proc(cap_t cap_d)
{
return _cap_set_proc(&multithread, cap_d);
}
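/*
 * A minimal usage sketch (comment only, not compiled here): the usual
 * pattern for cap_set_proc() is to read the current state, adjust one
 * or more flags, and write the result back. This assumes CAP_NET_RAW
 * is already in the Permitted set; raising an Effective bit only
 * works for permitted capabilities. The function name is illustrative.
 *
 *     #include <sys/capability.h>
 *
 *     int raise_net_raw_effective(void) {
 *         const cap_value_t cap_list[] = {CAP_NET_RAW};
 *         cap_t caps = cap_get_proc();
 *         int ret = -1;
 *         if (caps == NULL) {
 *             return -1;
 *         }
 *         if (cap_set_flag(caps, CAP_EFFECTIVE, 1, cap_list,
 *                          CAP_SET) == 0) {
 *             ret = cap_set_proc(caps);
 *         }
 *         cap_free(caps);
 *         return ret;
 *     }
 */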
/* the following two functions are not required by POSIX */
/* read the caps on a specific process */
int capgetp(pid_t pid, cap_t cap_d)
{
int error;
if (!good_cap_t(cap_d)) {
errno = EINVAL;
return -1;
}
_cap_debug("getting process capabilities for proc %d", pid);
_cap_mu_lock(&cap_d->mutex);
cap_d->head.pid = pid;
error = capget(&cap_d->head, &cap_d->u[0].set);
cap_d->head.pid = 0;
_cap_mu_unlock(&cap_d->mutex);
return error;
}
/* allocate space for and return capabilities of target process */
cap_t cap_get_pid(pid_t pid)
{
cap_t result;
result = cap_init();
if (result) {
if (capgetp(pid, result) != 0) {
int my_errno;
my_errno = errno;
cap_free(result);
errno = my_errno;
result = NULL;
}
}
return result;
}
/*
 * set the caps on a specific process/process group etc. The kernel
 * has long since deprecated this asynchronous interface. DON'T EXPECT
 * THIS TO EVER WORK AGAIN.
 */
int capsetp(pid_t pid, cap_t cap_d)
{
int error;
if (!good_cap_t(cap_d)) {
errno = EINVAL;
return -1;
}
_cap_debug("setting process capabilities for proc %d", pid);
_cap_mu_lock(&cap_d->mutex);
cap_d->head.pid = pid;
error = capset(&cap_d->head, &cap_d->u[0].set);
cap_d->head.version = _LIBCAP_CAPABILITY_VERSION;
cap_d->head.pid = 0;
_cap_mu_unlock(&cap_d->mutex);
return error;
}
/* the kernel api requires unsigned long arguments */
#define pr_arg(x) ((unsigned long) x)
/* get a capability from the bounding set */
int cap_get_bound(cap_value_t cap)
{
return prctl(PR_CAPBSET_READ, pr_arg(cap), pr_arg(0));
}
static int _cap_drop_bound(struct syscaller_s *sc, cap_value_t cap)
{
return _libcap_wprctl3(sc, PR_CAPBSET_DROP, pr_arg(cap), pr_arg(0));
}
/* drop a capability from the bounding set */
int cap_drop_bound(cap_value_t cap) {
return _cap_drop_bound(&multithread, cap);
}
/* get a capability from the ambient set */
int cap_get_ambient(cap_value_t cap)
{
int result;
result = prctl(PR_CAP_AMBIENT, pr_arg(PR_CAP_AMBIENT_IS_SET),
pr_arg(cap), pr_arg(0), pr_arg(0));
if (result < 0) {
errno = -result;
return -1;
}
return result;
}
static int _cap_set_ambient(struct syscaller_s *sc,
cap_value_t cap, cap_flag_value_t set)
{
int val;
switch (set) {
case CAP_SET:
val = PR_CAP_AMBIENT_RAISE;
break;
case CAP_CLEAR:
val = PR_CAP_AMBIENT_LOWER;
break;
default:
errno = EINVAL;
return -1;
}
return _libcap_wprctl6(sc, PR_CAP_AMBIENT, pr_arg(val), pr_arg(cap),
pr_arg(0), pr_arg(0), pr_arg(0));
}
/*
* cap_set_ambient modifies a single ambient capability value.
*/
int cap_set_ambient(cap_value_t cap, cap_flag_value_t set)
{
return _cap_set_ambient(&multithread, cap, set);
}
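/*
 * A minimal usage sketch (comment only, not compiled here): an
 * ambient capability can only be raised while that capability is in
 * both the Permitted and Inheritable sets, so a typical caller raises
 * the Inheritable bit first and then the ambient one. This assumes
 * CAP_NET_BIND_SERVICE is already Permitted; the function name is
 * illustrative.
 *
 *     #include <sys/capability.h>
 *
 *     int make_net_bind_ambient(void) {
 *         const cap_value_t cap_list[] = {CAP_NET_BIND_SERVICE};
 *         cap_t caps = cap_get_proc();
 *         int ret = -1;
 *         if (caps == NULL) {
 *             return -1;
 *         }
 *         if (cap_set_flag(caps, CAP_INHERITABLE, 1, cap_list,
 *                          CAP_SET) == 0 &&
 *             cap_set_proc(caps) == 0) {
 *             ret = cap_set_ambient(CAP_NET_BIND_SERVICE, CAP_SET);
 *         }
 *         cap_free(caps);
 *         return ret;
 *     }
 */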
static int _cap_reset_ambient(struct syscaller_s *sc)
{
int olderrno = errno;
cap_value_t c;
int result = 0;
for (c = 0; !result; c++) {
result = cap_get_ambient(c);
if (result == -1) {
errno = olderrno;
return 0;
}
}
return _libcap_wprctl6(sc, PR_CAP_AMBIENT,
pr_arg(PR_CAP_AMBIENT_CLEAR_ALL),
pr_arg(0), pr_arg(0), pr_arg(0), pr_arg(0));
}
/*
 * cap_reset_ambient erases all ambient capabilities. It reads the
 * ambient caps before performing the erase to work around the corner
 * case where the set is already empty but the ambient cap API is
 * locked.
 */
int cap_reset_ambient(void)
{
return _cap_reset_ambient(&multithread);
}
/*
* Read the security mode of the current process.
*/
unsigned cap_get_secbits(void)
{
return (unsigned) prctl(PR_GET_SECUREBITS, pr_arg(0), pr_arg(0));
}
static int _cap_set_secbits(struct syscaller_s *sc, unsigned bits)
{
return _libcap_wprctl3(sc, PR_SET_SECUREBITS, bits, 0);
}
/*
* Set the secbits of the current process.
*/
int cap_set_secbits(unsigned bits)
{
return _cap_set_secbits(&multithread, bits);
}
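/*
 * A minimal usage sketch (comment only, not compiled here): changing
 * securebits requires CAP_SETPCAP in the caller's Effective set. The
 * particular bit combination below, which disables uid-0 based
 * privilege and locks that choice in, is just one example; the
 * function name is illustrative.
 *
 *     #include <sys/capability.h>
 *     #include <sys/securebits.h>
 *
 *     int lock_out_root_privilege(void) {
 *         return cap_set_secbits(SECBIT_NOROOT | SECBIT_NOROOT_LOCKED);
 *     }
 */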
/*
* Attempt to raise the no new privs prctl value.
*/
static void _cap_set_no_new_privs(struct syscaller_s *sc)
{
(void) _libcap_wprctl6(sc, PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0, 0);
}
/*
 * cap_prctl performs a 6-argument prctl() call on the current
 * thread. Use cap_prctlw() if you want to perform a POSIX semantics
 * prctl() system call.
 */
int cap_prctl(long int pr_cmd, long int arg1, long int arg2,
long int arg3, long int arg4, long int arg5)
{
return prctl(pr_cmd, arg1, arg2, arg3, arg4, arg5);
}
/*
 * cap_prctlw performs a POSIX semantics prctl() call: a 6-argument
 * prctl() call that executes on all available threads when libpsx is
 * linked. The 'w' suffix refers to the fact that one only ever needs
 * to invoke this if the call will write some kernel state.
 */
int cap_prctlw(long int pr_cmd, long int arg1, long int arg2,
long int arg3, long int arg4, long int arg5)
{
return _libcap_wprctl6(&multithread, pr_cmd, arg1, arg2, arg3, arg4, arg5);
}
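/*
 * A minimal usage sketch (comment only, not compiled here):
 * cap_prctlw() suits prctl() operations that write kernel state and
 * should apply to every thread when libpsx is linked, for example
 * turning on the keep-capabilities flag process-wide. The function
 * name is illustrative.
 *
 *     #include <sys/capability.h>
 *     #include <sys/prctl.h>
 *
 *     int keep_caps_everywhere(void) {
 *         return cap_prctlw(PR_SET_KEEPCAPS, 1, 0, 0, 0, 0);
 *     }
 */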
/*
* Some predefined constants
*/
#define CAP_SECURED_BITS_BASIC \
(SECBIT_NOROOT | SECBIT_NOROOT_LOCKED | \
SECBIT_NO_SETUID_FIXUP | SECBIT_NO_SETUID_FIXUP_LOCKED | \
SECBIT_KEEP_CAPS_LOCKED)
#define CAP_SECURED_BITS_AMBIENT (CAP_SECURED_BITS_BASIC | \
SECBIT_NO_CAP_AMBIENT_RAISE | SECBIT_NO_CAP_AMBIENT_RAISE_LOCKED)
static cap_value_t raise_cap_setpcap[] = {CAP_SETPCAP};
static int _cap_set_mode(struct syscaller_s *sc, cap_mode_t flavor)
{
int ret;
unsigned secbits = CAP_SECURED_BITS_AMBIENT;
cap_t working = cap_get_proc();
if (working == NULL) {
_cap_debug("getting current process' capabilities failed");
return -1;
}
ret = cap_set_flag(working, CAP_EFFECTIVE, 1, raise_cap_setpcap, CAP_SET) |
_cap_set_proc(sc, working);
if (ret == 0) {
cap_flag_t c;
switch (flavor) {
case CAP_MODE_NOPRIV:
/* fall through */
case CAP_MODE_PURE1E_INIT:
(void) cap_clear_flag(working, CAP_INHERITABLE);
/* fall through */
case CAP_MODE_PURE1E:
if (!CAP_AMBIENT_SUPPORTED()) {
secbits = CAP_SECURED_BITS_BASIC;
} else {
ret = _cap_reset_ambient(sc);
if (ret) {
break; /* ambient dropping failed */
}
}
ret = _cap_set_secbits(sc, secbits);
if (flavor != CAP_MODE_NOPRIV) {
break;
}
/* just for "case CAP_MODE_NOPRIV:" */
for (c = 0; cap_get_bound(c) >= 0; c++) {
(void) _cap_drop_bound(sc, c);
}
(void) cap_clear_flag(working, CAP_PERMITTED);
/* for good measure */
_cap_set_no_new_privs(sc);
break;
case CAP_MODE_HYBRID:
ret = _cap_set_secbits(sc, 0);
break;
default:
errno = EINVAL;
ret = -1;
break;
}
}
(void) cap_clear_flag(working, CAP_EFFECTIVE);
ret = _cap_set_proc(sc, working) | ret;
(void) cap_free(working);
return ret;
}
/*
 * cap_set_mode locks the overarching capability framework of the
 * present process, and thus its children, to a predefined flavor.
 * Once set, these modes cannot be undone by the affected process
 * tree, and setting one can only be performed by a process permitted
 * to use "cap_setpcap". Note, a side effect of this function, whether
 * it succeeds or fails, is to clear at least the CAP_EFFECTIVE flags
 * for the current process.
 */
int cap_set_mode(cap_mode_t flavor)
{
return _cap_set_mode(&multithread, flavor);
}
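/*
 * A minimal usage sketch (comment only, not compiled here): a daemon
 * that has finished its privileged setup can irreversibly give up all
 * capability-based privilege like this. The call needs CAP_SETPCAP in
 * the Permitted set to succeed; the function name is illustrative.
 *
 *     #include <stdio.h>
 *     #include <stdlib.h>
 *     #include <sys/capability.h>
 *
 *     void drop_all_privilege_or_die(void) {
 *         if (cap_set_mode(CAP_MODE_NOPRIV) != 0) {
 *             perror("cap_set_mode(CAP_MODE_NOPRIV)");
 *             exit(1);
 *         }
 *     }
 */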
/*
* cap_get_mode attempts to determine what the current capability mode
* is. If it can find no match in the libcap pre-defined modes, it
* returns CAP_MODE_UNCERTAIN.
*/
cap_mode_t cap_get_mode(void)
{
unsigned secbits = cap_get_secbits();
if (secbits == 0) {
return CAP_MODE_HYBRID;
}
if ((secbits & CAP_SECURED_BITS_BASIC) != CAP_SECURED_BITS_BASIC) {
return CAP_MODE_UNCERTAIN;
}
/* validate ambient is not set */
int olderrno = errno;
int ret = 0, cf;
cap_value_t c;
for (c = 0; !ret; c++) {
ret = cap_get_ambient(c);
if (ret == -1) {
errno = olderrno;
if (c && secbits != CAP_SECURED_BITS_AMBIENT) {
return CAP_MODE_UNCERTAIN;
}
ret = 0;
break;
}
if (ret) {
return CAP_MODE_UNCERTAIN;
}
}
/*
* Explore how capabilities differ from empty.
*/
cap_t working = cap_get_proc();
cap_t empty = cap_init();
if (working == NULL || empty == NULL) {
_cap_debug("working=%p, empty=%p - need both non-NULL", working, empty);
ret = -1;
} else {
cf = cap_compare(empty, working);
}
cap_free(empty);
cap_free(working);
if (ret != 0) {
return CAP_MODE_UNCERTAIN;
}
if (CAP_DIFFERS(cf, CAP_INHERITABLE)) {
return CAP_MODE_PURE1E;
}
if (CAP_DIFFERS(cf, CAP_PERMITTED) || CAP_DIFFERS(cf, CAP_EFFECTIVE)) {
return CAP_MODE_PURE1E_INIT;
}
for (c = 0; ; c++) {
int v = cap_get_bound(c);
if (v == -1) {
break;
}
if (v) {
return CAP_MODE_PURE1E_INIT;
}
}
return CAP_MODE_NOPRIV;
}
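/*
 * A minimal usage sketch (comment only, not compiled here):
 * cap_get_mode() pairs naturally with cap_mode_name() when reporting
 * the detected mode. The function name is illustrative.
 *
 *     #include <stdio.h>
 *     #include <sys/capability.h>
 *
 *     void report_mode(void) {
 *         cap_mode_t mode = cap_get_mode();
 *         printf("capability mode: %s\n", cap_mode_name(mode));
 *     }
 */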
static int _cap_setuid(struct syscaller_s *sc, uid_t uid)
{
const cap_value_t raise_cap_setuid[] = {CAP_SETUID};
cap_t working = cap_get_proc();
if (working == NULL) {
return -1;
}
(void) cap_set_flag(working, CAP_EFFECTIVE,
1, raise_cap_setuid, CAP_SET);
/*
 * Note: we deliberately avoid glibc's setuid() in the case that
 * we've overridden the way libcap performs its state-setting
 * system calls. This is because prctl() needs to work in a POSIX
 * compliant way for the code below to work, so we are either
 * all-broken or not-broken and don't allow for "sort of working".
 */
(void) _libcap_wprctl3(sc, PR_SET_KEEPCAPS, 1, 0);
int ret = _cap_set_proc(sc, working);
if (ret == 0) {
if (_libcap_overrode_syscalls) {
ret = sc->three(SYS_setuid, (long int) uid, 0, 0);
if (ret < 0) {
errno = -ret;
ret = -1;
}
} else {
ret = setuid(uid);
}
}
int olderrno = errno;
(void) _libcap_wprctl3(sc, PR_SET_KEEPCAPS, 0, 0);
(void) cap_clear_flag(working, CAP_EFFECTIVE);
(void) _cap_set_proc(sc, working);
(void) cap_free(working);
errno = olderrno;
return ret;
}
/*
 * cap_setuid attempts to set the uid of the process without dropping
 * any of the process's permitted capabilities. A side effect of a
 * call to this function is that the effective set will be cleared by
 * the time the function returns.
 */
int cap_setuid(uid_t uid)
{
return _cap_setuid(&multithread, uid);
}
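/*
 * A minimal usage sketch (comment only, not compiled here): switch to
 * an unprivileged uid while keeping the Permitted set, then re-raise
 * the one capability still needed, since cap_setuid() clears the
 * Effective set. The uid 65534 ("nobody") is a placeholder and the
 * function name is illustrative; CAP_NET_BIND_SERVICE is assumed to
 * be Permitted.
 *
 *     #include <sys/capability.h>
 *
 *     int become_nobody_with_net_bind(void) {
 *         const cap_value_t cap_list[] = {CAP_NET_BIND_SERVICE};
 *         int ret = -1;
 *         if (cap_setuid(65534) != 0) {
 *             return -1;
 *         }
 *         cap_t caps = cap_get_proc();
 *         if (caps == NULL) {
 *             return -1;
 *         }
 *         if (cap_set_flag(caps, CAP_EFFECTIVE, 1, cap_list,
 *                          CAP_SET) == 0) {
 *             ret = cap_set_proc(caps);
 *         }
 *         cap_free(caps);
 *         return ret;
 *     }
 */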
#if defined(__arm__) || defined(__i386__) || \
defined(__i486__) || defined(__i586__) || defined(__i686__)
#define sys_setgroups_variant SYS_setgroups32
#else
#define sys_setgroups_variant SYS_setgroups
#endif
static int _cap_setgroups(struct syscaller_s *sc,
gid_t gid, size_t ngroups, const gid_t groups[])
{
const cap_value_t raise_cap_setgid[] = {CAP_SETGID};
cap_t working = cap_get_proc();
if (working == NULL) {
return -1;
}
(void) cap_set_flag(working, CAP_EFFECTIVE,
1, raise_cap_setgid, CAP_SET);
/*
 * Note: we deliberately avoid glibc's setgid() etc. in the case
 * that we've overridden the way libcap performs its state-setting
 * system calls. This is because prctl() needs to work in a POSIX
 * compliant way for the other functions in this file, so we are
 * either all-broken or not-broken and don't allow for "sort of
 * working".
 */
int ret = _cap_set_proc(sc, working);
if (_libcap_overrode_syscalls) {
if (ret == 0) {
ret = sc->three(SYS_setgid, (long int) gid, 0, 0);
}
if (ret == 0) {
ret = sc->three(sys_setgroups_variant, (long int) ngroups,
(long int) groups, 0);
}
if (ret < 0) {
errno = -ret;
ret = -1;
}
} else {
if (ret == 0) {
ret = setgid(gid);
}
if (ret == 0) {
ret = setgroups(ngroups, groups);
}
}
int olderrno = errno;
(void) cap_clear_flag(working, CAP_EFFECTIVE);
(void) _cap_set_proc(sc, working);
(void) cap_free(working);
errno = olderrno;
return ret;
}
/*
* cap_setgroups combines setting the gid with changing the set of
* supplemental groups for a user into one call that raises the needed
* capabilities to do it for the duration of the call. A side effect
* of a call to this function is that the effective set will be
* cleared by the time the function returns.
*/
int cap_setgroups(gid_t gid, size_t ngroups, const gid_t groups[])
{
return _cap_setgroups(&multithread, gid, ngroups, groups);
}
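/*
 * A minimal usage sketch (comment only, not compiled here): when
 * changing both group and user identity, drop the group identities
 * first (they still need privilege) and the uid last. Here the
 * supplementary group list is simply cleared; the function name and
 * id arguments are placeholders.
 *
 *     #include <sys/capability.h>
 *     #include <sys/types.h>
 *
 *     int drop_ids(uid_t uid, gid_t gid) {
 *         if (cap_setgroups(gid, 0, NULL) != 0) {
 *             return -1;
 *         }
 *         return cap_setuid(uid);
 *     }
 */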
/*
 * cap_iab_get_proc returns a cap_iab_t value initialized from the
 * current process's Inheritable, Ambient and Bounding (IAB) state.
 */
cap_iab_t cap_iab_get_proc(void)
{
cap_iab_t iab;
cap_t current;
iab = cap_iab_init();
if (iab == NULL) {
_cap_debug("no memory for IAB tuple");
return NULL;
}
current = cap_get_proc();
if (current == NULL) {
_cap_debug("no memory for cap_t");
cap_free(iab);
return NULL;
}
cap_iab_fill(iab, CAP_IAB_INH, current, CAP_INHERITABLE);
cap_free(current);
cap_value_t c;
for (c = cap_max_bits(); c; ) {
--c;
int o = c >> 5;
__u32 mask = 1U << (c & 31);
if (cap_get_bound(c) == 0) {
iab->nb[o] |= mask;
}
if (cap_get_ambient(c) == 1) {
iab->a[o] |= mask;
}
}
return iab;
}
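/*
 * A minimal usage sketch (comment only, not compiled here): the IAB
 * tuple observed by cap_iab_get_proc() can be rendered as text with
 * cap_iab_to_text(), which is handy for debugging. The function name
 * is illustrative.
 *
 *     #include <stdio.h>
 *     #include <sys/capability.h>
 *
 *     void show_iab(void) {
 *         cap_iab_t iab = cap_iab_get_proc();
 *         if (iab == NULL) {
 *             return;
 *         }
 *         char *text = cap_iab_to_text(iab);
 *         if (text != NULL) {
 *             printf("iab: %s\n", text);
 *             cap_free(text);
 *         }
 *         cap_free(iab);
 *     }
 */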
/*
 * _cap_iab_set_proc sets the IAB collection using the requested
 * syscaller. The iab value is locked by the caller. Note, if needed,
 * CAP_SETPCAP is raised in the Effective flag of the process for the
 * duration of this function call.
 */
static int _cap_iab_set_proc(struct syscaller_s *sc, cap_iab_t iab)
{
int ret, i, raising = 0, check_bound = 0;
cap_value_t c;
cap_t working, temp = cap_get_proc();
if (temp == NULL) {
return -1;
}
for (i = 0; i < _LIBCAP_CAPABILITY_U32S; i++) {
__u32 newI = iab->i[i];
__u32 oldIP = temp->u[i].flat[CAP_INHERITABLE] |
temp->u[i].flat[CAP_PERMITTED];
raising |= newI & ~oldIP;
if (iab->nb[i]) {
check_bound = 1;
}
temp->u[i].flat[CAP_INHERITABLE] = newI;
}
if (check_bound) {
check_bound = 0;
for (c = cap_max_bits(); c-- != 0; ) {
unsigned offset = c >> 5;
__u32 mask = 1U << (c & 31);
if ((iab->nb[offset] & mask) && cap_get_bound(c)) {
/* Requesting a change of bounding set. */
raising = 1;
check_bound = 1;
break;
}
}
}
working = cap_dup(temp);
if (working == NULL) {
ret = -1;
goto defer;
}
if (raising) {
ret = cap_set_flag(working, CAP_EFFECTIVE,
1, raise_cap_setpcap, CAP_SET);
if (ret) {
goto defer;
}
}
if ((ret = _cap_set_proc(sc, working))) {
goto defer;
}
if ((ret = _cap_reset_ambient(sc))) {
goto done;
}
for (c = cap_max_bits(); c-- != 0; ) {
unsigned offset = c >> 5;
__u32 mask = 1U << (c & 31);
if (iab->a[offset] & mask) {
ret = _cap_set_ambient(sc, c, CAP_SET);
if (ret) {
goto done;
}
}
if (check_bound && (iab->nb[offset] & mask)) {
/* drop the bounding bit */
ret = _cap_drop_bound(sc, c);
if (ret) {
goto done;
}
}
}
done:
(void) cap_set_proc(temp);
defer:
cap_free(working);
cap_free(temp);
return ret;
}
/*
* cap_iab_set_proc sets the iab capability vectors of the current
* process.
*/
int cap_iab_set_proc(cap_iab_t iab)
{
int retval;
if (!good_cap_iab_t(iab)) {
errno = EINVAL;
return -1;
}
_cap_mu_lock(&iab->mutex);
retval = _cap_iab_set_proc(&multithread, iab);
_cap_mu_unlock(&iab->mutex);
return retval;
}
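/*
 * A minimal usage sketch (comment only, not compiled here): an IAB
 * tuple can also be built from its text representation and applied in
 * one go. The "^cap_net_raw" string is intended to raise the
 * capability in the Ambient (and hence Inheritable) vector, which
 * only succeeds if the capability is already Permitted. The function
 * name is illustrative.
 *
 *     #include <sys/capability.h>
 *
 *     int adopt_iab(void) {
 *         cap_iab_t iab = cap_iab_from_text("^cap_net_raw");
 *         if (iab == NULL) {
 *             return -1;
 *         }
 *         int ret = cap_iab_set_proc(iab);
 *         cap_free(iab);
 *         return ret;
 *     }
 */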
/*
 * cap_launcher_callback primes the launcher with a callback that will
 * be invoked after the fork() but before any privilege has changed
 * and before the execve(). This can be used to augment the state of
 * the child process within the cap_launch() process. You can cancel
 * any callback associated with a launcher by calling this function
 * with a NULL callback_fn value.
 *
 * If the callback function returns anything other than 0, it is
 * considered to have failed and the launch will be aborted; further,
 * errno will be communicated to the parent.
 */
int cap_launcher_callback(cap_launch_t attr, int (callback_fn)(void *detail))
{
if (!good_cap_launch_t(attr)) {
errno = EINVAL;
return -1;
}
_cap_mu_lock(&attr->mutex);
attr->custom_setup_fn = callback_fn;
_cap_mu_unlock(&attr->mutex);
return 0;
}
/*
* cap_launcher_setuid primes the launcher to attempt a change of uid.
*/
int cap_launcher_setuid(cap_launch_t attr, uid_t uid)
{
if (!good_cap_launch_t(attr)) {
errno = EINVAL;
return -1;
}
_cap_mu_lock(&attr->mutex);
attr->uid = uid;
attr->change_uids = 1;
_cap_mu_unlock(&attr->mutex);
return 0;
}
/*
* cap_launcher_setgroups primes the launcher to attempt a change of
* gid and groups.
*/
int cap_launcher_setgroups(cap_launch_t attr, gid_t gid,
int ngroups, const gid_t *groups)
{
if (!good_cap_launch_t(attr)) {
errno = EINVAL;
return -1;
}
_cap_mu_lock(&attr->mutex);
attr->gid = gid;
attr->ngroups = ngroups;
attr->groups = groups;
attr->change_gids = 1;
_cap_mu_unlock(&attr->mutex);
return 0;
}
/*
* cap_launcher_set_mode primes the launcher to attempt a change of
* mode.
*/
int cap_launcher_set_mode(cap_launch_t attr, cap_mode_t flavor)
{
if (!good_cap_launch_t(attr)) {
errno = EINVAL;
return -1;
}
_cap_mu_lock(&attr->mutex);
attr->mode = flavor;
attr->change_mode = 1;
_cap_mu_unlock(&attr->mutex);
return 0;
}
/*
* cap_launcher_set_iab primes the launcher to attempt to change the
* IAB values of the launched child. The launcher locks iab while it
* is owned by the launcher: this prevents the user from
* asynchronously changing its value while it is associated with the
* launcher.
*/
cap_iab_t cap_launcher_set_iab(cap_launch_t attr, cap_iab_t iab)
{
if (!good_cap_launch_t(attr)) {
errno = EINVAL;
return NULL;
}
_cap_mu_lock(&attr->mutex);
cap_iab_t old = attr->iab;
attr->iab = iab;
if (old != NULL) {
_cap_mu_unlock(&old->mutex);
}
if (iab != NULL) {
_cap_mu_lock(&iab->mutex);
}
_cap_mu_unlock(&attr->mutex);
return old;
}
/*
* cap_launcher_set_chroot sets the intended chroot for the launched
* child.
*/
int cap_launcher_set_chroot(cap_launch_t attr, const char *chroot)
{
if (!good_cap_launch_t(attr)) {
errno = EINVAL;
return -1;
}
_cap_mu_lock(&attr->mutex);
attr->chroot = _libcap_strdup(chroot);
_cap_mu_unlock(&attr->mutex);
return 0;
}
static int _cap_chroot(struct syscaller_s *sc, const char *root)
{
const cap_value_t raise_cap_sys_chroot[] = {CAP_SYS_CHROOT};
cap_t working = cap_get_proc();
if (working == NULL) {
return -1;
}
(void) cap_set_flag(working, CAP_EFFECTIVE,
1, raise_cap_sys_chroot, CAP_SET);
int ret = _cap_set_proc(sc, working);
if (ret == 0) {
if (_libcap_overrode_syscalls) {
ret = sc->three(SYS_chroot, (long int) root, 0, 0);
if (ret < 0) {
errno = -ret;
ret = -1;
}
} else {
ret = chroot(root);
}
if (ret == 0) {
ret = chdir("/");
}
}
int olderrno = errno;
(void) cap_clear_flag(working, CAP_EFFECTIVE);
(void) _cap_set_proc(sc, working);
(void) cap_free(working);
errno = olderrno;
return ret;
}
/*
 * _cap_launch is invoked in the forked child. It cannot return: it is
 * required to exit if the execve fails. It writes the errno value of
 * any failure over the file descriptor, fd, and exits with status 1.
 */
__attribute__ ((noreturn))
static void _cap_launch(int fd, cap_launch_t attr, void *detail) {
struct syscaller_s *sc = &singlethread;
int my_errno;
if (attr->custom_setup_fn && attr->custom_setup_fn(detail)) {
goto defer;
}
if (attr->arg0 == NULL) {
/* handle the successful cap_func_launcher completion */
exit(0);
}
if (attr->change_uids && _cap_setuid(sc, attr->uid)) {
goto defer;
}
if (attr->change_gids &&
_cap_setgroups(sc, attr->gid, attr->ngroups, attr->groups)) {
goto defer;
}
if (attr->change_mode && _cap_set_mode(sc, attr->mode)) {
goto defer;
}
if (attr->iab && _cap_iab_set_proc(sc, attr->iab)) {
goto defer;
}
if (attr->chroot != NULL && _cap_chroot(sc, attr->chroot)) {
goto defer;
}
/*
* Some type wrangling to work around what the kernel API really
* means: not "const char **".
*/
const void *temp_args = attr->argv;
const void *temp_envp = attr->envp;
execve(attr->arg0, temp_args, temp_envp);
/* if the exec worked, execution will not reach here */
defer:
/*
* getting here means an error has occurred and errno is
* communicated to the parent
*/
my_errno = errno;
for (;;) {
int n = write(fd, &my_errno, sizeof(my_errno));
if (n < 0 && errno == EAGAIN) {
continue;
}
break;
}
close(fd);
exit(1);
}
/*
 * cap_launch performs a wrapped fork+(callback and/or exec) that
 * works in both an unthreaded environment and also where libcap is
 * linked with psx+pthreads. The function supports dropping privilege
 * in the forked child while retaining privilege in the parent
 * thread(s).
 *
 * When applying the IAB vector inside the fork, since the ambient set
 * is fragile with respect to changes in I or P, the function
 * carefully orders the setting of these inheritable characteristics
 * to make sure they stick.
 *
 * If the launch fails, this function returns -1 and sets errno.
 */
pid_t cap_launch(cap_launch_t attr, void *detail) {
int my_errno;
int ps[2];
pid_t child;
if (!good_cap_launch_t(attr)) {
errno = EINVAL;
return -1;
}
_cap_mu_lock(&attr->mutex);
/* The launch must have a purpose */
if (attr->custom_setup_fn == NULL &&
(attr->arg0 == NULL || attr->argv == NULL)) {
errno = EINVAL;
_cap_mu_unlock_return(&attr->mutex, -1);
}
if (pipe2(ps, O_CLOEXEC) != 0) {
_cap_mu_unlock_return(&attr->mutex, -1);
}
child = fork();
my_errno = errno;
if (!child) {
close(ps[0]);
prctl(PR_SET_NAME, "cap-launcher", 0, 0, 0);
_cap_launch(ps[1], attr, detail);
/* no return from above function */
}
/* child has its own copy, and parent no longer needs it locked. */
_cap_mu_unlock(&attr->mutex);
close(ps[1]);
if (child < 0) {
goto defer;
}
/*
* Extend this function's return codes to include setup failures
* in the child.
*/
for (;;) {
int ignored;
int n = read(ps[0], &my_errno, sizeof(my_errno));
if (n == 0) {
goto defer;
}
if (n < 0 && errno == EAGAIN) {
continue;
}
waitpid(child, &ignored, 0);
child = -1;
my_errno = ECHILD;
break;
}
defer:
close(ps[0]);
errno = my_errno;
return child;
}
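/*
 * A minimal usage sketch (comment only, not compiled here): launch a
 * helper program with all capability-based privilege removed in the
 * child, while the (possibly threaded) parent keeps its own
 * capabilities. The program path, arguments and function name are
 * placeholders.
 *
 *     #include <sys/capability.h>
 *     #include <sys/wait.h>
 *
 *     int run_unprivileged_id(void) {
 *         static const char * const args[] = {"id", NULL};
 *         int status;
 *
 *         cap_launch_t launcher = cap_new_launcher("/usr/bin/id",
 *                                                  args, NULL);
 *         if (launcher == NULL) {
 *             return -1;
 *         }
 *         cap_launcher_set_mode(launcher, CAP_MODE_NOPRIV);
 *         pid_t pid = cap_launch(launcher, NULL);
 *         cap_free(launcher);
 *         if (pid <= 0) {
 *             return -1;
 *         }
 *         return waitpid(pid, &status, 0) == pid ? 0 : -1;
 *     }
 */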