/*
* $Id: arch_gen_cpu_x86_seg.h,v 1.3 2009-01-23 10:34:53 potyra Exp $
*
* Derived from QEMU sources.
* Modified for FAUmachine by Volkmar Sieh.
*
* Copyright (c) 2005-2009 FAUmachine Team.
* Copyright (c) 2003 Fabrice Bellard.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
* USA
*/
#ifndef __CPU_SEG_H_INCLUDED
#define __CPU_SEG_H_INCLUDED
/*
* This function must always be used to load data in the segment
* cache: it synchronizes the hflags with the segment cache values.
*/
static inline __attribute__((__always_inline__)) void
cpu_x86_load_seg_cache(
    int seg_reg,
    unsigned int selector,
    target_ulong base,
    unsigned int limit,
    unsigned int flags
)
{
    SegmentCache *sc;
    unsigned int new_hflags;

    sc = &env->segs[seg_reg];
    sc->selector = selector;
    sc->base = base;
    sc->limit = limit;
    sc->flags = flags;

    /* Update the hidden flags. */
    if (seg_reg == R_CS) {
#if CONFIG_CPU >= 80486 && CONFIG_CPU_LM_SUPPORT
        if ((env->hflags & HF_LMA_MASK) && (flags & DESC_L_MASK)) {
            /* Long mode: 64-bit code segment. */
            env->hflags |= HF_CS32_MASK | HF_SS32_MASK | HF_CS64_MASK;
            env->hflags &= ~(HF_ADDSEG_MASK);
        } else
#endif
        {
            /*
             * Legacy / compatibility case: shift the descriptor's
             * B bit into the HF_CS32 position.
             */
            new_hflags = (env->segs[R_CS].flags & DESC_B_MASK)
                >> (DESC_B_SHIFT - HF_CS32_SHIFT);
            env->hflags = (env->hflags & ~(HF_CS32_MASK | HF_CS64_MASK))
                | new_hflags;
        }
    }
    /* Derive HF_SS32 from the stack segment's B bit the same way. */
    new_hflags = (env->segs[R_SS].flags & DESC_B_MASK)
        >> (DESC_B_SHIFT - HF_SS32_SHIFT);
    if (env->hflags & HF_CS64_MASK) {
        /* Zero base assumed for DS, ES and SS in long mode. */
    } else if (!(env->cr[0] & CPU_CR0_PE_MASK)
        || (env->eflags & CPU_VM_MASK)
        || !(new_hflags & HF_CS32_MASK)) {
        /*
         * FIXME: try to avoid this test. The problem comes from the
         * fact that in real mode or vm86 mode we only modify the
         * 'base' and 'selector' fields of the segment cache to go
         * faster. A solution may be to force addseg to one in
         * translate-i386.c.
         */
        new_hflags |= HF_ADDSEG_MASK;
    } else {
        new_hflags |= ((env->segs[R_DS].base
            | env->segs[R_ES].base
            | env->segs[R_SS].base) != 0)
            << HF_ADDSEG_SHIFT;
    }
    env->hflags = (env->hflags
        & ~(HF_SS32_MASK | HF_ADDSEG_MASK))
        | new_hflags;
}
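/*
 * Usage sketch (illustrative only, not part of the original header):
 * a real-mode segment load would come through this same path with
 * the base derived from the selector and a 64 KiB limit, e.g.
 *
 *     cpu_x86_load_seg_cache(R_DS, selector,
 *         (target_ulong)selector << 4, 0xffff, 0);
 *
 * Protected-mode callers instead pass the base, limit and flags
 * decoded from the descriptor-table entry, so that the hflags
 * derived above match the loaded descriptor.
 */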
/*
* Wrapper, just in case memory mappings must be changed.
*/
static inline void
cpu_x86_set_cpl(int cpl)
{
#if HF_CPL_MASK == 3
    /*
     * The CPL occupies the two low bits of hflags, so the new value
     * can be OR-ed in directly; the #if guards this hardcoding.
     */
    env->hflags = (env->hflags & ~HF_CPL_MASK) | cpl;
#else
#error HF_CPL_MASK is hardcoded
#endif
}
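/*
 * Usage sketch (illustrative only): on a privilege transition, for
 * example an interrupt that enters ring 0, the caller would update
 * the cached privilege level after loading the new CS:
 *
 *     cpu_x86_set_cpl(0);
 *
 * keeping the CPL bits in hflags consistent with the code segment.
 */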
#endif /* __CPU_SEG_H_INCLUDED */