EVMS Terminology

Because different operating systems use different terms to describe volume
management, we developed a set of terms specific to EVMS. This section defines
the general terms used in EVMS and describes the layers of the EVMS
architecture.

===============================================================================
1. General Terms

   The following list defines volume management terms as they relate
   specifically to EVMS.

   Sector

      The lowest level of addressability on a block device. This definition is
      in keeping with the standard meaning found in other management systems.
      In most situations, a sector is 512 bytes.

   Storage Object

      Any memory structure in EVMS that is capable of being a block device;
      in other words, an ordered set of sectors. (A minimal sketch of this
      model appears at the end of this section.)

   Logical Disk

      The ordered set of sectors that represents a physical device. IDE and
      SCSI disks appear as Logical Disks in EVMS.

   Disk Segment

      An ordered set of physically contiguous sectors residing on a logical
      disk or on another disk segment. The general analogy for a segment is to
      a traditional disk partition, such as in DOS or OS/2.

   Storage Region

      An ordered set of logically contiguous sectors (that are not necessarily
      physically contiguous). The underlying mapping can be to logical disks,
      segments, or other regions. Linux LVM and AIX LVM LVs, as well as MD
      devices, are represented as regions in EVMS.

   Storage Container

      A collection of storage objects. Storage containers provide a re-mapping
      from this collection to a new set of storage objects that the container
      exports. The appropriate analogy for a storage container is to volume
      groups, such as in the AIX LVM and the Linux LVM. However, EVMS
      containers are not restricted to any one remapping scheme, as is the case
      with volume groups in LVM or AIX. The remapping could be completely
      arbitrary.

   Feature Object

      A logically contiguous address space created from one or more disks,
      segments, regions or other feature objects through the use of an EVMS
      native feature. Feature Objects are essentially the same as Regions,
      except that Feature Objects contain EVMS-specific metadata.

   EVMS Logical Volume

      A mountable storage object. EVMS volumes contain metadata at the end of
      the underlying object, and at a minimum will have a static name and
      static minor number. Any object in EVMS can be made into an EVMS volume.

   Compatibility Logical Volume

      A mountable storage object that does not contain any EVMS native metadata.
      Many plug-ins in EVMS provide support for the capabilities of other
      volume management schemes. Volumes that are designated as "compatibility"
      are guaranteed to be backward compatible with that particular scheme
      because they do not contain any EVMS native metadata. Any disk, segment,
      or region can be a compatibility volume. However, Feature objects cannot
      become compatibility volumes.
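
   The following minimal C sketch illustrates the object model described
   above. It is not the EVMS plug-in API; the structure and function names
   are hypothetical and chosen only for illustration. It shows how every
   storage object can be treated as an ordered set of sectors, and how a
   stacked object such as a disk segment simply re-maps its sectors onto a
   contiguous range of its parent object.

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical model of a storage object: an ordered set of "count"
       * sectors, possibly built on top of a parent object starting at
       * sector "start" of that parent. Not the real EVMS structures. */
      struct object {
          const char    *name;
          struct object *parent;   /* NULL for a logical disk           */
          uint64_t       start;    /* first sector within the parent    */
          uint64_t       count;    /* number of sectors in this object  */
      };

      /* Resolve a logical sector of "obj" down to the underlying disk. */
      static const struct object *resolve(const struct object *obj,
                                          uint64_t lsn, uint64_t *psn)
      {
          while (obj->parent != NULL) {
              lsn += obj->start;   /* a segment is physically contiguous */
              obj  = obj->parent;
          }
          *psn = lsn;
          return obj;
      }

      int main(void)
      {
          struct object disk = { "hda",  NULL,  0,  16u * 1024 * 1024 };
          struct object seg  = { "hda1", &disk, 63,  2u * 1024 * 1024 };
          uint64_t psn;
          const struct object *d = resolve(&seg, 100, &psn);

          printf("hda1 sector 100 -> %s sector %llu\n",
                 d->name, (unsigned long long)psn);
          return 0;
      }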

===============================================================================
2. Layer Definitions

   EVMS defines a layered architecture where plug-ins in each layer create
   abstractions of the layer(s) below. EVMS also allows most plug-ins to
   create abstractions of objects within the same layer. The following list
   defines these layers from the bottom up.

   Logical Device Managers

      The first layer is the logical device managers. These plug-ins
      communicate with the hardware device drivers to create the first EVMS
      objects. Currently, all local devices (most IDE and SCSI disks) are
      handled by a single plug-in. Future releases of EVMS might have
      additional device managers to do network device management, such as for
      disks on a storage area network (SAN).

   Segment Managers

      The second layer is the segment managers. In general, these plug-ins
      handle the segmenting, or partitioning, of disk drives. The engine
      components can replace partitioning programs, such as fdisk and disk
      druid, and the kernel components can replace the in-kernel disk
      partitioning code. Segment managers can also be "stacked," meaning that
      one segment manager can take input from another segment manager.

      Currently, there are three plug-ins in this layer. The most commonly used
      is the DOS Segment Manager. This plug-in handles the DOS partitioning
      scheme, which is the scheme traditionally used by Linux. This plug-in
      also handles some special cases that arise when using OS/2 partitions.
      There is also a plug-in to handle the new GPT partitioning scheme on
      IA-64 machines, and a plug-in to handle S/390 partitions (CDL/LDL/CMS).
      Both of these plug-ins are still in development, and only support
      discovery and the I/O path. Other segment manager plug-ins may be added
      to support other partitioning schemes (e.g., Macintosh, Sun, and SGI).

   Region Managers

      The third layer in EVMS is the region managers. This layer is intended to
      provide a place for plug-ins that ensure compatibility with existing
      volume management schemes in Linux or other operating systems. Region
      managers are intended to model systems that provide a logical abstraction
      above disks or partitions.

      Like the segment managers, region managers can also be stacked. Therefore,
      the input object(s) to a region manager can be disks, segments, or other
      regions.

      There are currently four region manager plug-ins in EVMS. The first is
      the LVM plug-in, which provides compatibility with the Linux LVM and
      allows the creation of volume groups (containers) and logical volumes
      (regions).

      Two more plug-ins are the AIX and OS/2 region managers. The AIX LVM is
      very similar in functionality to the Linux LVM, and uses volume groups
      and logical volumes. The AIX plug-in is still under development; it
      currently provides most of the necessary kernel functionality but is
      still limited in user space. The OS/2 plug-in provides compatibility with
      volumes created under OS/2. Unlike the Linux and AIX LVMs, the OS/2 LVM
      is based on the linear linking of disk partitions, along with bad-block
      relocation.

      The fourth region manager plug-in is the Multi-Disk (MD) plug-in for
      RAID. This plug-in provides RAID levels linear, 0, 1, 4, and 5 in
      software. The ability to stack region managers allows combinations of
      RAID and LVM. For instance, a stripe set (RAID 0) could be used as an
      LVM physical volume (PV), or two LVM logical volumes (LVs) could be
      mirrored using RAID 1.
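
      As a rough illustration of the kind of re-mapping a region manager
      performs, the following hypothetical C sketch shows the usual RAID 0
      striping arithmetic: a logically contiguous sector address is mapped to
      a (member disk, physical sector) pair. It is not taken from the MD
      plug-in; the chunk size and disk count are assumptions made for the
      example.

         #include <stdint.h>
         #include <stdio.h>

         #define CHUNK_SECTORS 128   /* assumed 64 KB stripe unit          */
         #define NR_DISKS      2     /* assumed two-disk stripe set        */

         /* Map a logical sector of a RAID 0 region to a member disk and a
          * physical sector on that disk. */
         static void raid0_map(uint64_t lsn, unsigned *disk, uint64_t *psn)
         {
             uint64_t chunk  = lsn / CHUNK_SECTORS;  /* which stripe unit  */
             uint64_t offset = lsn % CHUNK_SECTORS;  /* offset inside it   */

             *disk = (unsigned)(chunk % NR_DISKS);   /* round-robin member */
             *psn  = (chunk / NR_DISKS) * CHUNK_SECTORS + offset;
         }

         int main(void)
         {
             unsigned disk;
             uint64_t psn;

             raid0_map(300, &disk, &psn);
             printf("logical sector 300 -> disk %u, sector %llu\n",
                    disk, (unsigned long long)psn);
             return 0;
         }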

   EVMS Features

      The next layer is EVMS Features. This layer is where new EVMS-native
      functionality is implemented. EVMS Features can be built on any object
      in the system, including disks, segments, regions, or other feature
      objects. EVMS Features all share a common type of metadata, which makes
      discovery of Feature objects much more efficient, and recovery of broken
      Feature objects much more reliable.

      There are three Features currently available in EVMS. The first Feature
      is Drive Linking. This plug-in simply allows any number of objects to be
      linearly concatenated into a single object.
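
      A minimal sketch of how such a linear concatenation can be addressed is
      shown below. The structure and function names are hypothetical; the
      point is only that a logical sector is mapped to whichever child object
      holds it.

         #include <stdint.h>
         #include <stdio.h>

         /* Hypothetical child of a drive-link object: only its size in
          * sectors matters for the address calculation. */
         struct child { const char *name; uint64_t count; };

         /* Map a logical sector of the concatenated object to one child.
          * Returns the child index, or -1 if the sector is out of range. */
         static int link_map(const struct child *c, int nr,
                             uint64_t lsn, uint64_t *off)
         {
             for (int i = 0; i < nr; i++) {
                 if (lsn < c[i].count) {
                     *off = lsn;
                     return i;
                 }
                 lsn -= c[i].count;   /* skip past this child */
             }
             return -1;
         }

         int main(void)
         {
             struct child c[] = { { "sda1", 1000 }, { "sdb1", 4000 } };
             uint64_t off;
             int i = link_map(c, 2, 2500, &off);

             if (i >= 0)
                 printf("sector 2500 -> %s sector %llu\n",
                        c[i].name, (unsigned long long)off);
             return 0;
         }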

      The second Feature is Bad-Block-Relocation (BBR). BBR monitors its I/O
      path and detects write failures (which may be caused by a damaged disk).
      In the event of such a failure, the data from that request is stored in a
      new location. BBR keeps track of this remapping, and any additional I/Os
      to that location are redirected to the new location.
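
      The following hypothetical C sketch illustrates the remapping idea: a
      small table records sectors whose writes failed and the spare sectors
      that replaced them, and every later I/O consults that table first. The
      table layout and spare-area location are assumptions for the example,
      not the BBR plug-in's actual format.

         #include <stdint.h>
         #include <stdio.h>

         #define MAX_REMAPS 16              /* assumed table size          */

         struct remap { uint64_t bad, replacement; };

         static struct remap table[MAX_REMAPS];
         static int          nr_remaps;
         static uint64_t     next_spare = 1000000;  /* assumed spare area  */

         /* Consult the remap table before issuing an I/O to "sector". */
         static uint64_t bbr_remap(uint64_t sector)
         {
             for (int i = 0; i < nr_remaps; i++)
                 if (table[i].bad == sector)
                     return table[i].replacement;
             return sector;
         }

         /* Called when a write to "sector" fails: pick a spare sector and
          * record the mapping so later I/O is redirected there. */
         static uint64_t bbr_write_failure(uint64_t sector)
         {
             if (nr_remaps == MAX_REMAPS)   /* table full: cannot relocate */
                 return sector;
             table[nr_remaps].bad         = sector;
             table[nr_remaps].replacement = next_spare;
             nr_remaps++;
             return next_spare++;
         }

         int main(void)
         {
             bbr_write_failure(42);        /* pretend a write to 42 failed */
             printf("sector 42 now maps to %llu\n",
                    (unsigned long long)bbr_remap(42));
             return 0;
         }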

      The third Feature is Snapshotting. Snapshotting provides a mechanism for
      creating a "frozen" copy of a volume at a single instant in time, without
      having to take that volume off-line. This is very useful for performing
      backups on a live system. Snapshots work with any volume (EVMS or
      compatibility), and can use any other available object as a backing
      store. After a snapshot is created, writes to the "original" volume cause
      the original contents of that location to be copied to the snapshot's
      storage object. Reads from the snapshot volume then appear to come from
      the original volume as it existed at the time the snapshot was created.
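
      The copy-on-write behaviour can be summarized with the following toy C
      sketch, which is not the Snapshotting plug-in's code: the chunk size,
      names, and in-memory "volumes" are assumptions for illustration.

         #include <stdio.h>
         #include <string.h>

         #define NR_CHUNKS   4
         #define CHUNK_BYTES 8

         /* Toy original volume, snapshot backing store, and a table that
          * records which chunks have already been preserved. */
         static char origin[NR_CHUNKS][CHUNK_BYTES] =
             { "aaaa", "bbbb", "cccc", "dddd" };
         static char store[NR_CHUNKS][CHUNK_BYTES];
         static int  copied[NR_CHUNKS];

         /* Write to the original: copy the old contents to the snapshot's
          * storage object first, then let the write proceed. */
         static void write_origin(int chunk, const char *data)
         {
             if (!copied[chunk]) {
                 memcpy(store[chunk], origin[chunk], CHUNK_BYTES);
                 copied[chunk] = 1;
             }
             snprintf(origin[chunk], CHUNK_BYTES, "%s", data);
         }

         /* Read from the snapshot: use the preserved copy if one exists;
          * otherwise the original still holds the point-in-time data. */
         static const char *read_snapshot(int chunk)
         {
             return copied[chunk] ? store[chunk] : origin[chunk];
         }

         int main(void)
         {
             write_origin(1, "BBBB");
             printf("origin[1]=%s  snapshot[1]=%s\n",
                    origin[1], read_snapshot(1));
             return 0;
         }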

   File System Interface Modules

      File System Interface Modules, or FSIMs, are the one layer of EVMS that
      exists only in the user-space engine. These plug-ins provide coordination
      with the filesystems during certain volume management operations. For
      instance, when expanding or shrinking a volume, the filesystem must also
      be expanded or shrunk to the appropriate size. Ordering is important
      here: a filesystem cannot be expanded before its volume, and a volume
      cannot be shrunk before its filesystem. The FSIMs allow EVMS to ensure
      this coordination and ordering, as sketched below.
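
      The following hypothetical C sketch shows only that ordering rule; the
      function names stand in for the volume-manager and FSIM steps and are
      not part of the EVMS interfaces.

         #include <stdio.h>

         /* Stand-ins for the two halves of a resize operation. */
         static void resize_volume(long sectors)
         {
             printf("volume manager: resize volume to %ld sectors\n", sectors);
         }

         static void resize_filesystem(long sectors)
         {
             printf("FSIM: resize filesystem to %ld sectors\n", sectors);
         }

         /* Grow: expand the volume first, then let the FSIM grow the
          * filesystem into the new space.  Shrink: shrink the filesystem
          * first, then shrink the volume underneath it. */
         static void resize(long current, long new_size)
         {
             if (new_size > current) {
                 resize_volume(new_size);
                 resize_filesystem(new_size);
             } else {
                 resize_filesystem(new_size);
                 resize_volume(new_size);
             }
         }

         int main(void)
         {
             resize(1000, 2000);   /* expand */
             resize(2000, 1500);   /* shrink */
             return 0;
         }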

      FSIMs also provide the ability to perform filesystem operations from one
      of the EVMS user interfaces. For instance, a user can make new filesystems
      and check existing filesystems by interacting with the FSIM.