/*
* Berkeley Lab Checkpoint/Restart (BLCR) for Linux is Copyright (c)
* 2003, The Regents of the University of California, through Lawrence
* Berkeley National Laboratory (subject to receipt of any required
* approvals from the U.S. Dept. of Energy). All rights reserved.
*
* Portions may be copyrighted by others, as may be noted in specific
* copyright notices within specific files.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* $Id: cr_task.c,v 1.21.6.1 2009/03/06 19:26:28 phargrov Exp $
*/
#include "cr_module.h"
// As a simple convention, the routines here which start w/ '__'
// assume you hold the proper locks, while others are to be called
// without holding the locks.
// List of all tasks for which the C/R module has information.
// This is a place to keep all the data that one would consider
// adding to (struct task_struct) if C/R were a patch rather
// than a module.
// TODO: use a hash rather than a list.
LIST_HEAD(cr_task_list);
// Read/write spinlock to protect cr_task_list.
//
// This lock nests OUTSIDE the kernel's tasklist_lock.
//
// This lock nests OUTSIDE any request-specific locks which
// are linked off the cr_task_t.
CR_DEFINE_RWLOCK(cr_task_lock);
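
// Example (illustrative sketch only, not part of the module): the
// acquisition order implied by the nesting rules above. cr_task_lock
// is the OUTER lock and must be taken before tasklist_lock.
#if 0
    write_lock(&cr_task_lock);     /* outer: C/R module lock    */
    read_lock(&tasklist_lock);     /* inner: kernel's task list */
    /* ... walk the task list and/or cr_task_list ... */
    read_unlock(&tasklist_lock);
    write_unlock(&cr_task_lock);
#endif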
// __cr_task_get(task, create)
//
// Finds the entry for the given task if one exists.
// If (create != 0), creates one when none exists.
//
// Note that even if (create != 0) we can return NULL if
// unable to allocate the memory. This is because we must call
// cr_kmem_cache_zalloc() w/ GFP_ATOMIC (anything else might sleep
// while holding the spinlock), and atomic allocations can fail.
// Since we cannot know in this routine what other locks the
// caller might hold, it is the caller's problem to deal with a
// NULL return. In CRI_DEBUG builds, an ERR_PTR(-EINVAL) is
// returned instead if the module is being unloaded.
// XXX: can we fix this problem?
//
// XXX: This could be implemented w/ a hash rather than a linear search.
//
// Must be called w/ cr_task_lock held (for writing if create != 0).
cr_task_t *__cr_task_get(struct task_struct *task, int create)
{
    cr_task_t *cr_task;

    list_for_each_entry(cr_task, &cr_task_list, task_list) {
	if (cr_task->task == task) {
	    atomic_inc(&cr_task->ref_count);
	    return cr_task;
	}
    }

    cr_task = NULL;
    if (create) {
#if CRI_DEBUG
	if (!CR_MODULE_GET()) {
	    CR_ERR("Checkpoint API call after rmmod!");
	    cr_task = ERR_PTR(-EINVAL);
	    goto out;
	}
#endif
	cr_task = cr_kmem_cache_zalloc(*cr_task, cr_task_cachep, GFP_ATOMIC);
	if (cr_task) {
	    atomic_set(&cr_task->ref_count, 1);
	    cr_task->task = task;
	    cr_task->fd = -1;
	    cr_task->self_exec_id = task->self_exec_id;
	    INIT_LIST_HEAD(&cr_task->req_list);
	    INIT_LIST_HEAD(&cr_task->proc_req_list);
	    get_task_struct(task);
	    list_add_tail(&cr_task->task_list, &cr_task_list);
#if CRI_DEBUG
	    CR_KTRACE_REFCNT("Alloc cr_task_t %p for pid %d", cr_task, task->pid);
	} else {
	    CR_MODULE_PUT();
#endif
	}
    }

#if CRI_DEBUG
out:
#endif
    return cr_task;
}
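
// Example (illustrative sketch only, not part of the module): a caller
// creating an entry must hold cr_task_lock for writing and must cope
// with a NULL return (and, in CRI_DEBUG builds, an ERR_PTR).
#if 0
    cr_task_t *ct;

    write_lock(&cr_task_lock);
    ct = __cr_task_get(current, 1);
    write_unlock(&cr_task_lock);
    if (ct == NULL) {
	/* GFP_ATOMIC allocation failed */
    } else if (IS_ERR(ct)) {
	/* CRI_DEBUG build: module is being unloaded */
    } else {
	/* ... use ct, then balance with cr_task_put(ct) ... */
    }
#endif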
// __cr_task_put(cr_task)
//
// Drop one reference to a cr_task.
// If this is the last reference then free the resources.
//
// Must be called w/ cr_task_lock held for writing.
void __cr_task_put(cr_task_t *cr_task)
{
    CRI_ASSERT(atomic_read(&cr_task->ref_count));
    if (atomic_dec_and_test(&cr_task->ref_count)) {
	list_del(&cr_task->task_list);
	put_task_struct(cr_task->task);
	kmem_cache_free(cr_task_cachep, cr_task);
#if CRI_DEBUG
	CR_MODULE_PUT();
	CR_KTRACE_REFCNT("Free cr_task_t %p", cr_task);
#endif
    } else if (atomic_read(&cr_task->ref_count) <= 0) {
	CR_WARN("%s [%d]: WARNING: Unbalanced __cr_task_put on cr_task_t %p",
		__FUNCTION__, current->pid, cr_task);
    }
}
// cr_task_get(task)
//
// Routine to find the cr_task_t entry corresponding to a given task.
// Called when a task begins checkpointing itself.
// Can also be called to see if any request is outstanding for
// the given task.
//
// Called w/o holding the cr_task_lock.
cr_task_t *cr_task_get(struct task_struct *task)
{
    cr_task_t *cr_task;

    read_lock(&cr_task_lock);
    cr_task = __cr_task_get(task, 0);
    read_unlock(&cr_task_lock);

    return cr_task;
}
// cr_task_put(cr_task)
//
// Drop one reference to a cr_task.
// If this is the last reference then free the resources.
//
// Called w/o holding the cr_task_lock.
void cr_task_put(cr_task_t *cr_task)
{
    write_lock(&cr_task_lock);
    __cr_task_put(cr_task);
    write_unlock(&cr_task_lock);
}
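
// Example (illustrative sketch only, not part of the module): every
// successful cr_task_get() must eventually be balanced by exactly one
// cr_task_put(), which takes the write lock itself.
#if 0
    cr_task_t *ct = cr_task_get(current);
    if (ct != NULL) {
	/* a checkpoint request is outstanding for this task */
	cr_task_put(ct);	/* drop the reference taken above */
    }
#endif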