/*
* Copyright (c) 1996, 1998, 1999 University of Utah and the Flux Group.
* All rights reserved.
*
* This file is part of the Flux OSKit. The OSKit is free software, also known
* as "open source;" you can redistribute it and/or modify it under the terms
* of the GNU General Public License (GPL), version 2, as published by the Free
* Software Foundation (FSF). To explore alternate licensing terms, contact
* the University of Utah at csl-dist@cs.utah.edu or +1-801-585-3271.
*
* The OSKit is distributed in the hope that it will be useful, but WITHOUT ANY
* WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
* FOR A PARTICULAR PURPOSE. See the GPL for more details. You should have
* received a copy of the GPL along with the OSKit; see the file COPYING. If
* not, write to the FSF, 59 Temple Place #330, Boston, MA 02111-1307, USA.
*/
#ifdef DEFAULT_SCHEDULER
/*
* Simple scheduler. This scheduler implements everything needed for
* the SCHED_FIFO and SCHED_RR policies.
*/
#include <threads/pthread_internal.h>
#include "pthread_signal.h"
/*
 * The RunQ is a multilevel queue of doubly linked lists. Use a bitmask
 * to indicate which runq is non-empty, with the least significant bit
 * being the highest priority (because of how ffs works).
 */
#define MAXPRI (PRIORITY_MAX + 1)
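/*
 * Illustrative sketch (values assumed: PRIORITY_MAX == 31, 1-based
 * ffs): a thread at priority 20 maps to runq index
 * PRIORITY_MAX - 20 == 11, so bit 11 of threads_whichrunq is set.
 * With priorities 5 and 20 both runnable (indices 26 and 11),
 * ffs() finds bit 11 first, and pthread_runq_maxprio() returns
 * 31 - (12 - 1) == 20, the higher priority of the two.
 */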
/*
* These are internal to this file.
*/
static queue_head_t threads_runq[MAXPRI] = { {0} };
static int threads_runq_count = 0;
static oskit_u32_t threads_whichrunq;
/*
* The scheduler lock is used elsewhere. It must always be taken with
* interrupts disabled since it is used within interrupt handlers to
* change the run queue.
*/
pthread_lock_t pthread_sched_lock = PTHREAD_LOCK_INITIALIZER;
extern int ffs();
#ifdef MEASURE
#include <oskit/machine/proc_reg.h>
struct pthread_stats {
int wakeups;
unsigned long wakeup_cycles;
int switches;
unsigned long switch_cycles;
};
static struct pthread_stats stats;
void dump_scheduler_stats();
#endif
/*
* Initialize the runqs to empty.
*/
void
pthread_init_scheduler(void)
{
int i;
for (i = 0; i < MAXPRI; i++)
queue_init(&threads_runq[i]);
#ifdef MEASURE
atexit(dump_scheduler_stats);
#endif
}
/*
* Are there any threads on the runq?
*/
inline int
pthread_runq_empty(void)
{
return (threads_whichrunq == 0);
}
/*
* Get the highest priority scheduled thread.
*/
inline int
pthread_runq_maxprio(void)
{
int prio;
if (pthread_runq_empty())
return -1;
else {
prio = ffs(threads_whichrunq);
return PRIORITY_MAX - (prio - 1);
}
}
/*
* Determine if a pthread is on the runq. Use a separate field
* since using the flags would require locking the thread. Use the
* queue chain pointer instead, setting it to zero when a thread is
* removed from the queue.
*/
inline int
pthread_runq_onrunq(pthread_thread_t *pthread)
{
return (int) pthread->runq.next;
}
/*
* Add and remove threads from the runq. The runq lock should be locked,
* and interrupts disabled.
*/
/*
* Insert at the tail of the runq.
*/
inline void
pthread_runq_insert_tail(pthread_thread_t *pthread)
{
int prio = PRIORITY_MAX - pthread->priority;
queue_head_t *phdr = &threads_runq[prio];
queue_enter(phdr, pthread, pthread_thread_t *, runq);
threads_whichrunq |= (1 << prio);
threads_runq_count++;
}
/*
* Insert at the head of the runq.
*/
inline void
pthread_runq_insert_head(pthread_thread_t *pthread)
{
int prio = PRIORITY_MAX - pthread->priority;
queue_head_t *phdr = &threads_runq[prio];
queue_enter_first(phdr, pthread, pthread_thread_t *, runq);
threads_whichrunq |= (1 << prio);
threads_runq_count++;
}
/*
* Dequeue highest priority pthread.
*/
OSKIT_INLINE pthread_thread_t *
pthread_runq_dequeue(void)
{
int prio = ffs(threads_whichrunq) - 1;
queue_head_t *phdr = &threads_runq[prio];
pthread_thread_t *pnext;
queue_remove_first(phdr, pnext, pthread_thread_t *, runq);
pnext->runq.next = (queue_entry_t) 0;
threads_runq_count--;
if (queue_empty(phdr))
threads_whichrunq &= ~(1 << prio);
return pnext;
}
/*
* Remove an arbitrary thread from the runq.
*/
inline void
pthread_runq_remove(pthread_thread_t *pthread)
{
int prio = PRIORITY_MAX - pthread->priority;
queue_head_t *phdr = &threads_runq[prio];
queue_remove(phdr, pthread, pthread_thread_t *, runq);
pthread->runq.next = (queue_entry_t) 0;
threads_runq_count--;
if (queue_empty(phdr))
threads_whichrunq &= ~(1 << prio);
}
/*
* This handles the details of finding another thread to switch to.
* Return value to indicate whether a switch actually happened.
*/
int
pthread_sched_dispatch(resched_flags_t reason, pthread_thread_t *pnext)
{
pthread_thread_t *pthread = CURPTHREAD();
int preemptable;
#ifdef MEASURE
unsigned long before, after;
#endif
save_preemption_enable(preemptable);
assert_interrupts_disabled();
#ifdef MEASURE
before = get_tsc();
#endif
/*
* Going to muck with the scheduler queues, so take that lock.
*/
pthread_lock(&pthread_sched_lock);
/*
* Decide what to do with the current thread.
*/
switch (reason) {
case RESCHED_USERYIELD:
if (pthread == IDLETHREAD)
panic("pthread_sched_reschedule: Idlethread!\n");
/*
* A user directed yield forces the thread to the back
* of the queue.
*/
pthread_runq_insert_tail(pthread);
break;
case RESCHED_YIELD:
if (pthread == IDLETHREAD)
panic("pthread_sched_reschedule: Idlethread!\n");
/*
 * An involuntary yield forces the thread to the front
* of the queue. It will probably be rerun right away!
*/
pthread_runq_insert_head(pthread);
break;
case RESCHED_PREEMPT:
if (pthread == IDLETHREAD)
panic("pthread_sched_reschedule: Idlethread!\n");
/*
* Time based preemption. If the policy is RR, then the
* thread goes to the back of the queue if it has used
* all of its ticks. Otherwise it stays at the head, of
* course.
*
* If the policy is FIFO, this has no real effect, other
* than to possibly allow a higher priority thread to get
* the CPU.
*/
if (pthread->policy == SCHED_RR) {
if (--pthread->ticks == 0) {
pthread_runq_insert_tail(pthread);
pthread->ticks = SCHED_RR_INTERVAL;
}
else
pthread_runq_insert_head(pthread);
}
else if (pthread->policy == SCHED_FIFO)
pthread_runq_insert_head(pthread);
break;
case RESCHED_INTERNAL:
/*
* This will be the idle thread.
*/
break;
default:
/*
* All other rescheduling modes are blocks, and thus ignored.
*/
if (pthread == IDLETHREAD)
panic("pthread_sched_reschedule: Idlethread!\n");
break;
}
/*
* Clear preemption flag
*/
pthread->preempt = PREEMPT_NEEDED = 0;
/*
* Now find a thread to run.
*/
if (pnext) {
/*
* Locked thread was provided, so done with the scheduler.
*/
pthread_unlock(&pthread_sched_lock);
}
else {
if (pthread_runq_empty()) {
#ifdef THREADS_DEBUG0
pthread_lock(&threads_sleepers_lock);
if (! threads_sleepers)
panic("RESCHED: "
"Nothing to run. Might be deadlock!");
pthread_unlock(&threads_sleepers_lock);
#endif
pnext = IDLETHREAD;
}
else
pnext = pthread_runq_dequeue();
pthread_unlock(&pthread_sched_lock);
/*
* Avoid switch into same thread.
*/
if (pnext == pthread) {
pthread_unlock(&pthread->schedlock);
/*
 * Deliver pending signals as long as we have
 * the opportunity to do so.
*/
SIGCHECK(pthread);
return 0;
}
pthread_lock(&pnext->schedlock);
}
#ifdef MEASURE
after = get_tsc();
if (after > before) {
stats.switches++;
stats.switch_cycles += (after - before);
}
#endif
/*
* Switch to next thread. The schedlock for the current thread
* is released in the context switch code, while the lock for
* next thread is carried across the context switch and is released
* once the switch is committed.
*/
thread_switch(pnext, &pthread->schedlock, CURPTHREAD());
/* Must set the current thread pointer! */
SETCURPTHREAD(pthread);
/*
* Thread switch is complete. Unlock the schedlock and proceed.
*/
pthread_unlock(&pthread->schedlock);
/*
* Look for exceptions before letting it return to whatever it
* was doing before it yielded.
*/
if (pthread->sleeprec) {
/*
* Thread is returning to an osenv_sleep. The thread must
* be allowed to return, even if it was killed, so the driver
* state can be cleaned up. The death will be noticed later.
*/
goto done;
}
/*
* These flag bits are protected by the pthread lock.
*/
pthread_lock(&pthread->lock);
if ((pthread->flags &
(THREAD_KILLED|THREAD_EXITING)) == THREAD_KILLED) {
/*
 * Thread was canceled. Time to actually reap it, but only
 * if it's not already in the process of exiting.
*
* XXX: The problem is if the process lock is still
* held. Since the rest of the oskit is not set up to
* handle cancelation of I/O operations, let threads
* continue to run until the process lock is released.
* At that point the death will be noticed.
*/
if (! osenv_process_locked()) {
pthread_exit_locked((void *) PTHREAD_CANCELED);
/*
* Never returns
*/
}
else {
pthread_unlock(&pthread->lock);
goto done;
}
}
pthread_unlock(&pthread->lock);
/*
* Look for signals. Deliver them in the context of the thread.
*/
SIGCHECK(pthread);
/* Need to really *restore* the flag since we are in a new context */
PREEMPT_ENABLE = preemptable;
return 1;
done:
/* Need to really *restore* the flag since we are in a new context */
PREEMPT_ENABLE = preemptable;
return 1;
}
/*
* Here starts the external interface functions.
*/
/*
* Generic switch code. Find a new thread to run and switch to it.
*/
int
pthread_sched_reschedule(resched_flags_t reason, pthread_lock_t *plock)
{
pthread_thread_t *pthread = CURPTHREAD();
int enabled, rc;
save_interrupt_enable(enabled);
disable_interrupts();
/*
* Lock the thread schedlock. This provides atomicity with respect
 * to the switch, since another CPU will not be able to switch
* into a thread until it is completely switched out someplace else.
* Once the schedlock is taken, the provided lock can be released,
* since the thread can now be woken up, although it cannot be
* switched into for the reason just given.
*/
assert(plock != &(pthread->lock));
pthread_lock(&(pthread->schedlock));
if (plock)
pthread_unlock(plock);
rc = pthread_sched_dispatch(reason, 0);
restore_interrupt_enable(enabled);
return rc;
}
/*
* Directed switch. The current thread is blocked with the waitstate.
*/
void
pthread_sched_handoff(int waitstate, pthread_thread_t *pnext)
{
pthread_thread_t *pthread = CURPTHREAD();
int enabled;
save_interrupt_enable(enabled);
disable_interrupts();
if (waitstate) {
pthread_lock(&pthread->waitlock);
pthread->waitflags |= waitstate;
pthread_lock(&pthread->schedlock);
pthread_lock(&pnext->schedlock);
pthread_unlock(&pthread->waitlock);
pthread_sched_dispatch(RESCHED_BLOCK, pnext);
}
else {
pthread_lock(&pthread->schedlock);
pthread_lock(&pnext->schedlock);
pthread_sched_dispatch(RESCHED_YIELD, pnext);
}
restore_interrupt_enable(enabled);
}
/*
* Place a thread that was blocked, back on the runq. The pthread lock
* is *not* taken. This routine is called from interrupt level to
* reschedule threads waiting in timed conditions or sleep.
*
* Return a boolean value indicating whether the current thread is still
* the thread that should be running.
*/
int
pthread_sched_setrunnable(pthread_thread_t *pthread)
{
int enabled, resched = 0;
#ifdef MEASURE
unsigned long before, after;
#endif
save_interrupt_enable(enabled);
disable_interrupts();
#ifdef MEASURE
before = get_tsc();
#endif
pthread_lock(&pthread_sched_lock);
if (pthread_runq_onrunq(pthread))
panic("pthread_sched_setrunnable: Already on runQ: 0x%x(%d)",
(int) pthread, pthread->tid);
pthread_runq_insert_tail(pthread);
if (CURPTHREAD()->priority < pthread_runq_maxprio())
resched = PREEMPT_NEEDED = 1;
pthread_unlock(&pthread_sched_lock);
#ifdef MEASURE
after = get_tsc();
if (after > before) {
stats.wakeups++;
stats.wakeup_cycles += (after - before);
}
#endif
restore_interrupt_enable(enabled);
return resched;
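/*
 * Usage sketch (hypothetical names): a timer interrupt handler that
 * wakes a thread blocked in a timed condition wait might do
 *
 *	if (pthread_sched_setrunnable(pwaiting))
 *		request_soft_resched();	 hypothetical; arrange for a
 *					 reschedule on interrupt return
 *
 * The nonzero return only notes that a higher priority thread is now
 * runnable; the actual context switch happens outside the handler.
 */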
}
/*
 * Change the state of a thread. In this case, it is the priority and
 * policy that are possibly being changed. Return an indicator to the
 * caller if the change requires a reschedule.
*
* Currently, the priority inheritance code is not SMP safe. Also, it is
* specific to this scheduler module.
*/
int
pthread_sched_change_state(pthread_thread_t *pthread, int newprio, int policy)
{
int resched = 0;
assert_interrupts_enabled();
disable_interrupts();
pthread_lock(&pthread->lock);
switch (policy) {
case -1:
/* No change specified */
break;
case SCHED_FIFO:
case SCHED_RR:
pthread->policy = policy;
break;
case SCHED_DECAY:
panic("pthread_setschedparam: SCHED_DECAY not supported yet");
break;
default:
panic("pthread_sched_change_state: Bad policy specified");
}
if (pthread->base_priority == newprio)
goto done;
pthread->base_priority = newprio;
#ifdef PRI_INHERIT
if (pthread->base_priority < pthread->priority) {
if (! pthread->inherits_from)
pthread_priority_decreasing_recompute(pthread);
}
else {
pthread->priority = newprio;
if (pthread->waiting_for)
pthread_priority_increasing_recompute(pthread);
if (pthread->inherits_from &&
newprio > pthread->inherits_from->priority) {
pthread->inherits_from = NULL_THREADPTR;
}
}
pthread_lock(&pthread_sched_lock);
#else
pthread_lock(&pthread_sched_lock);
if (pthread_runq_onrunq(pthread)) {
pthread_runq_remove(pthread);
pthread->priority = newprio;
pthread_runq_insert_tail(pthread);
}
else {
/*
 * If it's not on the runq, the thread priority can
 * just be changed since there are no current
 * dependencies on it. If, however, the current thread
 * has its priority changed, a reschedule might be
 * necessary.
*/
pthread->priority = newprio;
}
#endif
if (CURPTHREAD()->priority < pthread_runq_maxprio())
resched = PREEMPT_NEEDED = 1;
pthread_unlock(&pthread_sched_lock);
done:
pthread_unlock(&pthread->lock);
enable_interrupts();
return resched;
}
#ifdef PRI_INHERIT
#ifdef SMP
Sorry, this code does not work in SMP mode. No locking at all!
#endif
/*
* This priority inheritance algorithm is based on the paper "Removing
* Priority Inversion from an Operating System," by Steven Sommer.
*/
int threads_priority_debug = 0;
void pthread_priority_decreasing_recompute(pthread_thread_t *pthread);
void pthread_priority_increasing_recompute(pthread_thread_t *pthread);
OSKIT_INLINE void
threads_change_priority(pthread_thread_t *pthread, int newprio)
{
if (pthread_runq_onrunq(pthread)) {
pthread_runq_remove(pthread);
pthread->priority = newprio;
pthread_runq_insert_tail(pthread);
}
else
pthread->priority = newprio;
}
void
pthread_priority_inherit(pthread_thread_t *pwaiting_for)
{
pthread_thread_t *pthread = CURPTHREAD();
int enabled;
save_interrupt_enable(enabled);
disable_interrupts();
/*
* Add blocked thread to targets' list of waiters.
*/
queue_enter(&pwaiting_for->waiters, pthread,
pthread_thread_t *, waiters_chain);
/*
* Set the waiting link so that the chain of waiters
* can be followed when doing the priority transfer.
*/
pthread->waiting_for = pwaiting_for;
pthread_priority_increasing_recompute(pthread);
restore_interrupt_enable(enabled);
}
/*
 * Do a transitive priority transfer, following the waiting_for links
 * and transferring higher priority to lower priority threads.
*
* pthread is locked.
*/
void
pthread_priority_increasing_recompute(pthread_thread_t *pthread)
{
pthread_thread_t *pwaiting_for = pthread->waiting_for;
do {
if (pthread->priority <= pwaiting_for->priority)
break;
if (threads_priority_debug)
printf("Inherit: Transferring priority: "
"From %p(%d/%d) to %p(%d/%d)\n",
pthread, (int) pthread->tid, pthread->priority,
pwaiting_for, (int) pwaiting_for->tid,
pwaiting_for->priority);
threads_change_priority(pwaiting_for, pthread->priority);
pwaiting_for->inherits_from = pthread;
pthread = pwaiting_for;
pwaiting_for = pwaiting_for->waiting_for;
} while (pwaiting_for);
}
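/*
 * Worked example (hypothetical threads and priorities): A (priority
 * 10) blocks on a resource held by B (priority 5), which is itself
 * waiting on C (priority 3). Starting from A, the loop raises B to 10
 * and records B->inherits_from = A, then raises C to 10 likewise,
 * stopping when the chain ends or a waited-on thread already has at
 * least the donated priority.
 */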
/*
* Undo the effects of a priority inheritance after a thread unblocks
* another thread.
*/
void
pthread_priority_uninherit(pthread_thread_t *punblocked)
{
pthread_thread_t *pthread = CURPTHREAD();
pthread_thread_t *ptmp1;
int enabled;
save_interrupt_enable(enabled);
disable_interrupts();
/*
* Remove the unblocked thread from the set of threads blocked
* by the current thread.
*/
queue_remove(&pthread->waiters,
punblocked, pthread_thread_t *, waiters_chain);
/*
* The rest of the waiters are now waiting on the unblocked thread
* since it won the prize. Indicate that, and add the entire set
* of threads to the list of threads that punblocked is blocking.
*/
while (! queue_empty(&pthread->waiters)) {
queue_remove_first(&pthread->waiters,
ptmp1, pthread_thread_t *, waiters_chain);
ptmp1->waiting_for = punblocked;
queue_enter(&punblocked->waiters,
ptmp1, pthread_thread_t *, waiters_chain);
}
punblocked->waiting_for = NULL_THREADPTR;
/*
* Now recompute the priorities.
*/
if (pthread->inherits_from)
pthread_priority_decreasing_recompute(pthread);
restore_interrupt_enable(enabled);
}
void
pthread_priority_decreasing_recompute(pthread_thread_t *pthread)
{
int priority = pthread->priority;
pthread_thread_t *pnext = 0, *ptmp;
int maxpri = -1;
/*
* Find the highest priority thread from the set of threads
* waiting on pthread.
*/
queue_iterate(&pthread->waiters, ptmp, pthread_thread_t *, waiters_chain) {
if (ptmp->priority > maxpri) {
maxpri = ptmp->priority;
pnext = ptmp;
}
}
/*
 * If there is a waiting thread, and its priority is greater
 * than this thread's priority, then inherit priority
 * from that thread. Otherwise, reset this thread's priority
 * back to its base priority since either there is nothing to
 * inherit from (empty waiters), or its base priority is better
 * than any of the waiting threads.
*/
if (pnext) {
if (threads_priority_debug)
printf("Uninherit: Transferring priority: "
"From %p(%d/%d) to %p(%d/%d)\n",
pnext, (int) pnext->tid, pnext->priority,
pthread, (int) pthread->tid, pthread->priority);
if (pnext->priority > pthread->base_priority) {
threads_change_priority(pthread, pnext->priority);
pthread->inherits_from = pnext;
}
}
else {
if (threads_priority_debug)
printf("Resetting priority back to base: "
"%p(%d) - %d to %d\n",
pthread, (int) pthread->tid,
pthread->priority, pthread->base_priority);
threads_change_priority(pthread, pthread->base_priority);
pthread->inherits_from = 0;
}
/*
 * If this thread's priority was lowered, and another thread is
 * inheriting from it, the lowered priority must be propagated down
 * the chain. This would not happen when a thread unblocks another
 * thread. It could happen if an external event caused a blocked
 * thread to change state, and that thread was donating its priority.
*/
if (pthread->priority < priority) {
if (pthread->waiting_for &&
pthread->waiting_for->inherits_from == pthread) {
pthread_priority_decreasing_recompute(pthread->
waiting_for);
}
}
}
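/*
 * Example (hypothetical values): if B (base priority 5) was boosted
 * to 10 by a waiter that has since gone away, and its remaining
 * waiters have priority at most 7, B drops to 7 and inherits from
 * that waiter; with no waiters at all, B falls back to its base
 * priority of 5.
 */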
/*
* A thread is killed, but waiting on a thread. Must undo the inheritance.
*
* Interrupts should be disabled, and the thread locked.
*/
void
pthread_priority_kill(pthread_thread_t *pthread)
{
pthread_thread_t *pwaiting_for = pthread->waiting_for;
queue_remove(&pwaiting_for->waiters, pthread,
pthread_thread_t *, waiters_chain);
pthread->waiting_for = NULL_THREADPTR;
if (pwaiting_for->inherits_from == pthread)
pthread_priority_decreasing_recompute(pwaiting_for);
}
#endif
#ifdef MEASURE
void
dump_scheduler_stats()
{
printf("Wakeups: %d\n", stats.wakeups);
printf("Wakeup Cycles %lu\n", stats.wakeup_cycles);
printf("Switches: %d\n", stats.switches);
printf("Switch Cycles %lu\n", stats.switch_cycles);
}
#endif
#endif /* DEFAULT_SCHEDULER */