// -*- Mode: Go; indent-tabs-mode: t -*-
/*
* Copyright (C) 2019-2020 Canonical Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 3 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package gadget
import (
"errors"
"fmt"
"os"
"path"
"sort"
"strings"
"github.com/snapcore/snapd/asserts"
"github.com/snapcore/snapd/dirs"
"github.com/snapcore/snapd/gadget/device"
"github.com/snapcore/snapd/gadget/quantity"
"github.com/snapcore/snapd/kernel"
"github.com/snapcore/snapd/logger"
"github.com/snapcore/snapd/osutil"
"github.com/snapcore/snapd/osutil/disks"
"github.com/snapcore/snapd/strutil"
)
var (
ErrNoUpdate = errors.New("nothing to update")
)
// GadgetData holds references to a gadget revision metadata and its data directory.
type GadgetData struct {
// Info is the gadget metadata
Info *Info
// XXX: should be GadgetRootDir
// RootDir is the root directory of gadget snap data
RootDir string
// KernelRootDir is the root directory of kernel snap data
KernelRootDir string
}
// UpdatePolicyFunc is a callback that evaluates the provided pair of
// (potentially not yet resolved) structures and returns true when the
// pair should be part of an update. It may also return a filter
// function for the resolved content when not all of the content
// should be applied as part of the update (e.g. when updating assets
// from the kernel snap).
type UpdatePolicyFunc func(from, to *LaidOutStructure) (bool, ResolvedContentFilterFunc)
// ResolvedContentFilterFunc is a callback that evaluates the given
// ResolvedContent and returns true if it should be applied as part of
// an update. This is relevant for e.g. asset updates that come from
// the kernel snap.
type ResolvedContentFilterFunc func(*ResolvedContent) bool
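// As an illustrative sketch (hypothetical, not part of this package's API),
// a policy that includes every structure pair in the update and keeps all of
// its resolved content could look like:
//
//	var updateEverything UpdatePolicyFunc = func(from, to *LaidOutStructure) (bool, ResolvedContentFilterFunc) {
//		// include the pair in the update and apply all of its content
//		return true, func(*ResolvedContent) bool { return true }
//	}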
// ContentChange carries paths to files containing the content data being
// modified by the operation.
type ContentChange struct {
// Before is a path to a file containing the original data before the
// operation takes place (or took place in case of ContentRollback).
Before string
// After is a path to a file location of the data applied by the operation.
After string
}
type ContentOperation int
type ContentChangeAction int
const (
ContentWrite ContentOperation = iota
ContentUpdate
ContentRollback
ChangeAbort ContentChangeAction = iota
ChangeApply
ChangeIgnore
)
// ContentObserver allows for observing operations on the content of the gadget
// structures.
type ContentObserver interface {
// Observe is called to observe a pending or completed action related
// to content being written, updated, or rolled back. In each of
// the scenarios, the target path is relative under the root. The role
// of the affected partition is needed as different assets are tracked
// depending on whether this is a boot or a seed partition.
//
// For a file write or update, the source path points to the content
// that will be written. When called during rollback, the observe call
// happens after the original file has been restored (or removed if the
// file was added during the update), and the source path is empty.
//
// Returning ChangeApply indicates that the observer agrees that a given
// change should be applied. When called with a ContentUpdate or
// ContentWrite operation, returning ChangeIgnore indicates that the
// change shall be ignored. ChangeAbort is expected to be returned along
// with a non-nil error.
Observe(op ContentOperation, partRole, targetRootDir, relativeTargetPath string, dataChange *ContentChange) (ContentChangeAction, error)
}
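// A minimal ContentObserver that accepts every change could be sketched as
// follows (hypothetical example, not part of this package):
//
//	type nullObserver struct{}
//
//	func (nullObserver) Observe(op ContentOperation, partRole, targetRootDir, relativeTargetPath string, dataChange *ContentChange) (ContentChangeAction, error) {
//		// approve every write, update, and rollback unconditionally
//		return ChangeApply, nil
//	}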
// ContentUpdateObserver allows for observing update (and potentially a
// rollback) of the gadget structure content.
type ContentUpdateObserver interface {
ContentObserver
// BeforeWrite is called when the backups of content that will get
// modified during the update are complete and update is ready to be
// applied.
BeforeWrite() error
// Canceled is called when the update has been canceled, or if changes
// were written and the update has been reverted.
Canceled() error
}
// searchVolumeWithTraitsAndMatchParts searches for a disk matching the given
// traits and returns the matched partitions.
func searchVolumeWithTraitsAndMatchParts(vol *Volume, traits DiskVolumeDeviceTraits, validateOpts *DiskVolumeValidationOptions) (disks.Disk, map[int]*OnDiskStructure, error) {
if validateOpts == nil {
validateOpts = &DiskVolumeValidationOptions{}
}
// iterate over the different traits, validating whether the resulting disk
// actually exists and matches the volume we have in the gadget.yaml
compatibleCandidate := func(candidate disks.Disk, method string, providedErr error) map[int]*OnDiskStructure {
if providedErr != nil {
if candidate != nil {
logger.Debugf("candidate disk %s not appropriate for volume %s because err: %v", candidate.KernelDeviceNode(), vol.Name, providedErr)
return nil
}
logger.Debugf("cannot locate disk for volume %s with method %s because err: %v", vol.Name, method, providedErr)
return nil
}
diskLayout, onDiskErr := OnDiskVolumeFromDevice(candidate.KernelDeviceNode())
if onDiskErr != nil {
// unexpected in reality, we already called one of
// DiskFromDeviceName or DiskFromDevicePath to get this reference,
// so it's unclear how those methods could return a disk that
// OnDiskVolumeFromDevice is unhappy about
logger.Debugf("cannot find on disk volume from candidate disk %s: %v", candidate.KernelDeviceNode(), onDiskErr)
return nil
}
// then try to validate it by laying out the volume
opts := &VolumeCompatibilityOptions{
AssumeCreatablePartitionsCreated: true,
AllowImplicitSystemData: validateOpts.AllowImplicitSystemData,
ExpectedStructureEncryption: validateOpts.ExpectedStructureEncryption,
}
gadgetStructToDiskStruct, ensureErr := EnsureVolumeCompatibility(vol, diskLayout, opts)
if ensureErr != nil {
logger.Debugf("candidate disk %s not appropriate for volume %s due to incompatibility: %v", candidate.KernelDeviceNode(), vol.Name, ensureErr)
return nil
}
// success, we found it
return gadgetStructToDiskStruct
}
// first try the kernel device path if it is set
if traits.OriginalDevicePath != "" {
disk, err := disks.DiskFromDevicePath(traits.OriginalDevicePath)
gadgetStructToDiskStruct := compatibleCandidate(disk, "device path", err)
if gadgetStructToDiskStruct != nil {
return disk, gadgetStructToDiskStruct, nil
}
}
// next try the kernel device node name
if traits.OriginalKernelPath != "" {
disk, err := disks.DiskFromDeviceName(traits.OriginalKernelPath)
gadgetStructToDiskStruct := compatibleCandidate(disk, "device name", err)
if gadgetStructToDiskStruct != nil {
return disk, gadgetStructToDiskStruct, nil
}
}
// next try the disk ID from the partition table
if traits.DiskID != "" {
// there isn't a way to find a disk using the disk ID directly, so we
// instead have to get all the disks and then check them all to see if
// the disk IDs match
blockdevDisks, err := disks.AllPhysicalDisks()
if err == nil {
for _, blockDevDisk := range blockdevDisks {
if blockDevDisk.DiskID() == traits.DiskID {
// found the block device for this Disk ID, get the
// disks.Disk for it
gadgetStructToDiskStruct := compatibleCandidate(blockDevDisk, "disk ID", err)
if gadgetStructToDiskStruct != nil {
return blockDevDisk, gadgetStructToDiskStruct, nil
}
// otherwise if it didn't match we keep iterating over
// the block devices, since we could have a situation
// where an attacker has cloned the disk and put their own
// content on it to attack the device and so there are two
// block devices with the same ID but non-matching
// structures
}
}
} else {
logger.Noticef("error getting all physical disks: %v", err)
}
}
// TODO: implement this final, last-ditch effort
// finally, try doing an inverse search using the individual
// structures to match a structure we measured previously to find an on-disk
// device and then find a disk from that device and see if it matches
return nil, nil, fmt.Errorf("cannot find physical disk laid out to map with volume %s", vol.Name)
}
// IsCreatableAtInstall returns whether the gadget structure would be created at
// install - currently that is only ubuntu-save, ubuntu-data, and ubuntu-boot
func IsCreatableAtInstall(gv *VolumeStructure) bool {
// a structure is creatable at install if it is one of the roles for
// system-save, system-data, or system-boot
switch gv.Role {
case SystemSave, SystemData, SystemBoot:
return true
default:
return false
}
}
func isCompatibleSchema(gadgetSchema, diskSchema string) bool {
switch gadgetSchema {
// XXX: "mbr,gpt" is currently unsupported
case "", "gpt":
return diskSchema == "gpt"
case "mbr":
return diskSchema == "dos"
case "emmc":
return diskSchema == "emmc"
default:
return false
}
}
func isVolumeEMMC(vol *Volume) bool {
return vol.Schema == schemaEMMC
}
func onDiskStructureIsLikelyImplicitSystemDataRole(gadgetVolume *Volume, diskLayout *OnDiskVolume, s OnDiskStructure) bool {
// in uc16/uc18 we used to allow system-data to be implicit / missing from
// the gadget.yaml in which case we won't have system-data in the laidOutVol
// but it will be in diskLayout, so we sometimes need to check if a given on
// disk partition looks like it was created implicitly by ubuntu-image as
// specified via the defaults in
// https://github.com/canonical/ubuntu-image-legacy/blob/master/ubuntu_image/parser.py#L568-L589
// namely it must meet the following conditions:
// * fs is ext4
// * partition type is "Linux filesystem data"
// * fs label is "writable"
// * this on disk structure is last on the disk
// * there is exactly one more structure on disk than partitions in the
// gadget
// * there is no system-data role in the gadget.yaml
// note: we specifically do not check the size of the structure because it
// likely was resized, but it also could have not been resized if there
// ended up being less than 10% free space as per the resize script in the
// initramfs:
// https://github.com/snapcore/core-build/blob/master/initramfs/scripts/local-premount/resize
// bare structures don't show up on disk, so we can't include them
// when calculating how many "structures" are in gadgetVolume to
// ensure that there is only one extra OnDiskStructure at the end
numPartsInGadget := 0
for _, s := range gadgetVolume.Structure {
if s.IsPartition() {
numPartsInGadget++
}
// also check for explicit system-data role
if s.Role == SystemData {
// s can't be implicit system-data since there is an explicit
// system-data
return false
}
}
numPartsOnDisk := len(diskLayout.Structure)
return s.PartitionFSType == "ext4" &&
(s.Type == "0FC63DAF-8483-4772-8E79-3D69D8477DE4" || s.Type == "83") &&
s.PartitionFSLabel == "writable" &&
// DiskIndex is 1-based
s.DiskIndex == numPartsOnDisk &&
numPartsInGadget+1 == numPartsOnDisk
}
func ensureVolumeEMMCCompatibility(gadgetVolume *Volume, diskVolume *OnDiskVolume) (map[int]*OnDiskStructure, error) {
gadgetStructIdxToOnDiskStruct := map[int]*OnDiskStructure{}
for _, gs := range gadgetVolume.Structure {
// ensure the device node exists
// TODO: maybe better to check /sys/block/
// example output from CM5:
// $ ls /sys/block
// mmcblk0 mmcblk0boot0 mmcblk0boot1
emmcNode := fmt.Sprintf("%s%s", diskVolume.Device, gs.Name)
if _, err := os.Stat(path.Join(dirs.GlobalRootDir, emmcNode)); err != nil {
return nil, fmt.Errorf("emmc disk partition %s is specified, but no such disk: %s",
gs.Name, path.Join(dirs.GlobalRootDir, emmcNode))
}
ds := &OnDiskStructure{
Name: gs.Name,
Node: emmcNode,
}
gadgetStructIdxToOnDiskStruct[gs.YamlIndex] = ds
}
return gadgetStructIdxToOnDiskStruct, nil
}
// VolumeCompatibilityOptions is a set of options for determining how
// strict to be when evaluating whether an on-disk structure matches a laid out
// structure.
type VolumeCompatibilityOptions struct {
// AssumeCreatablePartitionsCreated will assume that all partitions such as
// ubuntu-data, ubuntu-save, etc. that are creatable in install mode have
// already been created and thus must already exactly match what is in the
// gadget.yaml.
AssumeCreatablePartitionsCreated bool
// AllowImplicitSystemData allows the system-data role to be missing from
// the gadget volume as was allowed in UC18 and UC16 where the system-data
// partition would be dynamically inserted into the image at image build
// time by ubuntu-image without being mentioned in the gadget.yaml.
AllowImplicitSystemData bool
// ExpectedStructureEncryption is a map of the structure name to information
// about the encrypted partitions that can be used to validate whether a
// given structure should be accepted as an encrypted partition.
ExpectedStructureEncryption map[string]StructureEncryptionParameters
}
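// For example, an installer validating a run system with an encrypted
// ubuntu-data partition could pass options along these lines (hypothetical
// sketch):
//
//	opts := &VolumeCompatibilityOptions{
//		AssumeCreatablePartitionsCreated: true,
//		ExpectedStructureEncryption: map[string]StructureEncryptionParameters{
//			"ubuntu-data": {Method: EncryptionLUKS},
//		},
//	}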
// EnsureVolumeCompatibility checks compatibility between a gadget volume and a
// real disk. It returns a map from the gadget structures' yaml indexes to the
// disk structures that could be matched.
// TODO change to returning OnDiskAndGadgetStructurePair
func EnsureVolumeCompatibility(gadgetVolume *Volume, diskVolume *OnDiskVolume, opts *VolumeCompatibilityOptions) (map[int]*OnDiskStructure, error) {
gadgetStructIdxToOnDiskStruct := map[int]*OnDiskStructure{}
if opts == nil {
opts = &VolumeCompatibilityOptions{}
}
logger.Debugf("checking volume compatibility between gadget volume %s (partial: %v) and disk %s",
gadgetVolume.Name, gadgetVolume.Partial, diskVolume.Device)
// eMMC does not follow the normal validation rules and instead needs
// different validation so we can make sure the disk is compatible
// with the eMMC structures
if isVolumeEMMC(gadgetVolume) {
return ensureVolumeEMMCCompatibility(gadgetVolume, diskVolume)
}
eq := func(ds *OnDiskStructure, vss []VolumeStructure, vssIdx int) (bool, string) {
gs := &vss[vssIdx]
// name mismatch
if gs.Name != ds.Name {
// partitions have no names in MBR so bypass the name check
if gadgetVolume.Schema != "mbr" {
// don't return a reason if the names don't match
return false, ""
}
}
// start offset mismatch
if err := CheckValidStartOffset(ds.StartOffset, vss, vssIdx); err != nil {
return false, fmt.Sprintf("disk partition %q %v", ds.Name, err)
}
maxSz := effectivePartSize(gs)
switch {
// on disk size too small
case ds.Size < gs.MinSize:
return false, fmt.Sprintf("on disk size %d (%s) is smaller than gadget min size %d (%s)",
ds.Size, ds.Size.IECString(), gs.MinSize, gs.MinSize.IECString())
// on disk size too large
case ds.Size > maxSz:
// larger on disk size is allowed specifically only for system-data
if gs.Role != SystemData {
return false, fmt.Sprintf("on disk size %d (%s) is larger than gadget size %d (%s) (and the role should not be expanded)",
ds.Size, ds.Size.IECString(), maxSz, maxSz.IECString())
}
}
// If we got to this point, the structure on disk has the same
// name, and compatible size and offset, so the last thing to
// check is that the filesystem matches (or that we don't care
// about the filesystem).
// first handle the strict case where this partition was created at
// install in case it is an encrypted one
if opts.AssumeCreatablePartitionsCreated && IsCreatableAtInstall(gs) {
// only partitions that are creatable at install can be encrypted,
// check if this partition was encrypted
if encTypeParams, ok := opts.ExpectedStructureEncryption[gs.Name]; ok {
if encTypeParams.Method == "" {
return false, "encrypted structure parameter missing required parameter \"method\""
}
// for now we don't handle any other keys, but in case they show
// up in the wild for debugging purposes log off the key name
for k := range encTypeParams.unknownKeys {
if k != "method" {
logger.Noticef("ignoring unknown expected encryption structure parameter %q", k)
}
}
switch encTypeParams.Method {
case EncryptionLUKS:
// then this partition is expected to have been encrypted, the
// filesystem label on disk will need "-enc" appended
if ds.PartitionFSLabel != gs.Name+"-enc" {
return false, fmt.Sprintf("partition %[1]s is expected to be encrypted but is not named %[1]s-enc", gs.Name)
}
// the filesystem should also be "crypto_LUKS"
if ds.PartitionFSType != "crypto_LUKS" {
return false, fmt.Sprintf("partition %[1]s is expected to be encrypted but does not have an encrypted filesystem", gs.Name)
}
// at this point the partition matches
return true, ""
default:
return false, fmt.Sprintf("unsupported encrypted partition type %q", encTypeParams.Method)
}
}
// for non-encrypted partitions that were created at install, the
// below logic still applies
}
if opts.AssumeCreatablePartitionsCreated || !IsCreatableAtInstall(gs) {
// we assume that this partition has already been created
// successfully - either because this function was forced to (as is
// the case when doing gadget asset updates), or because this
// structure is not created during install
// note that we only check the filesystem if the gadget specified a
// filesystem; this is to allow cases where a structure in the
// gadget has an image, but does not specify the filesystem because
// it is some binary blob from a hardware vendor for non-Linux
// components on the device that _just so happen_ to also have a
// filesystem when the image is deployed to a partition. In this
// case we don't care about the filesystem at all because snapd does
// not touch it, unless a gadget asset update says to update that
// image file with a new binary image file. This also covers the
// partial filesystem case.
if gs.Filesystem != "" && gs.LinuxFilesystem() != ds.PartitionFSType {
// use more specific error message for structures that are
// not creatable at install when we are not being strict
if !IsCreatableAtInstall(gs) && !opts.AssumeCreatablePartitionsCreated {
return false, fmt.Sprintf("filesystems do not match (and the partition is not creatable at install): declared as %s, got %s", gs.Filesystem, ds.PartitionFSType)
}
// otherwise generic
return false, fmt.Sprintf("filesystems do not match: declared as %s, got %s", gs.Filesystem, ds.PartitionFSType)
}
}
// otherwise if we got here things are matching
return true, ""
}
gadgetContains := func(vss []VolumeStructure, ds *OnDiskStructure) (bool, string) {
reasonAbsent := ""
for vssIdx := range vss {
matches, reasonNotMatches := eq(ds, vss, vssIdx)
if matches {
return true, ""
}
// TODO: handle multiple error cases for DOS disks and fail early
// for GPT disks since we should not have multiple non-empty reasons
// for not matching for GPT disks, as that would require two YAML
// structures with the same name to be considered as candidates for
// a given on disk structure, and we do not allow duplicated
// structure names in the YAML at all via ValidateVolume.
//
// For DOS, since we cannot check the partition names, there will
// always be a reason if there was not a match, in which case we
// only want to return an error after we have finished searching the
// full haystack and didn't find any matches whatsoever. Note that
// the YAML structure that "should" have matched the on disk one we
// are looking for but doesn't because of some problem like wrong
// size or wrong filesystem may not be the last one, so returning
// only the last error like we do here is wrong. We should include
// all reasons why so the user can see which structure was the
// "closest" to what we were searching for so they can fix their
// gadget.yaml or on disk partitions so that it matches.
if reasonNotMatches != "" {
reasonAbsent = reasonNotMatches
}
}
if opts.AllowImplicitSystemData {
// Handle the case of an implicit system-data role before giving up;
// we used to allow system-data to be implicit from the gadget.yaml.
// In that case we won't have system-data in the gadget volume but it
// could be on the disk, so if after searching all the gadget
// structures we don't find the disk structure, check if we might
// be dealing with a structure that looks like the implicit
// system-data that ubuntu-image would have created.
if onDiskStructureIsLikelyImplicitSystemDataRole(gadgetVolume, diskVolume, *ds) {
return true, ""
}
}
return false, reasonAbsent
}
onDiskContains := func(dss []OnDiskStructure, vss []VolumeStructure, vssIdx int) (*OnDiskStructure, string) {
reasonAbsent := ""
for _, ds := range dss {
matches, reasonNotMatches := eq(&ds, vss, vssIdx)
if matches {
return &ds, ""
}
// this has the effect of only returning the last non-empty reason
// string
if reasonNotMatches != "" {
reasonAbsent = reasonNotMatches
}
}
return nil, reasonAbsent
}
// check size of volumes
lastUsableByte := quantity.Size(diskVolume.UsableSectorsEnd) * diskVolume.SectorSize
if gadgetVolume.MinSize() > lastUsableByte {
return nil, fmt.Errorf("device %v (last usable byte at %s) is too small to fit the requested minimal size (%s)", diskVolume.Device,
lastUsableByte.IECString(), gadgetVolume.MinSize().IECString())
}
// check that the sizes of all structures in the gadget are multiples of
// the disk sector size (unless the structure is the MBR)
for _, vs := range gadgetVolume.Structure {
if !vs.IsRoleMBR() {
for _, sz := range []quantity.Size{vs.MinSize, vs.Size} {
if sz%diskVolume.SectorSize != 0 {
return nil, fmt.Errorf("gadget volume structure %q size is not a multiple of disk sector size %v",
vs.Name, diskVolume.SectorSize)
}
}
}
}
// Check if gadget schema is compatible with the disk, when defined
if (!gadgetVolume.HasPartial(PartialSchema) || gadgetVolume.Schema != "") &&
!isCompatibleSchema(gadgetVolume.Schema, diskVolume.Schema) {
return nil, fmt.Errorf("disk partitioning schema %q doesn't match gadget schema %q", diskVolume.Schema, gadgetVolume.Schema)
}
// Check disk ID if defined in gadget
if gadgetVolume.ID != "" && gadgetVolume.ID != diskVolume.ID {
return nil, fmt.Errorf("disk ID %q doesn't match gadget volume ID %q", diskVolume.ID, gadgetVolume.ID)
}
// Check if all existing device partitions are also in gadget
// (unless partial structure).
if !gadgetVolume.HasPartial(PartialStructure) {
for _, ds := range diskVolume.Structure {
present, reasonAbsent := gadgetContains(gadgetVolume.Structure, &ds)
if !present {
if reasonAbsent != "" {
// use the right format so that it can be
// appended to the error message
reasonAbsent = fmt.Sprintf(": %s", reasonAbsent)
}
return nil, fmt.Errorf("cannot find disk partition %s (starting at %d) in gadget%s", ds.Node, ds.StartOffset, reasonAbsent)
}
}
}
// check all structures in the gadget are present on the disk, or have a
// valid excuse for absence (i.e. mbr or creatable structures at install)
var prevDs *OnDiskStructure
for vssIdx, gs := range gadgetVolume.Structure {
// we ignore reasonAbsent here since if there was an extra on disk
// structure that didn't match something in the YAML, we would have
// caught it above, this loop can only ever identify structures in the
// YAML that are not on disk at all
if ds, _ := onDiskContains(diskVolume.Structure, gadgetVolume.Structure, vssIdx); ds != nil {
gadgetStructIdxToOnDiskStruct[gs.YamlIndex] = ds
prevDs = ds
continue
}
// otherwise not present, figure out if it has a valid excuse
if !gs.IsPartition() {
// Raw structures like mbr or other "bare" type will not be
// identified by Linux and thus should be skipped as they will not
// show up on the disk. However, we insert a value in the map,
// assuming they are where expected.
offset := gs.Offset
if offset == nil {
// This case is only possible if min-size is being used, so we are
// in an update. We will always have prevDs set because at a
// minimum the first partition will have its offset defined.
// In any case, if using bare partitions, it is not a great
// idea to have some previous partition with a valid range
// of sizes.
offsetV := prevDs.StartOffset + quantity.Offset(prevDs.Size)
offset = &offsetV
}
ds := &OnDiskStructure{
Name: gs.Name,
Type: gs.Type,
StartOffset: *offset,
Size: gs.Size,
}
gadgetStructIdxToOnDiskStruct[gs.YamlIndex] = ds
prevDs = ds
continue
}
// allow structures that are creatable during install if we don't assume
// created partitions to already exist
if IsCreatableAtInstall(&gs) && !opts.AssumeCreatablePartitionsCreated {
continue
}
return nil, fmt.Errorf("cannot find gadget structure %q on disk", gs.Name)
}
// finally ensure that all encrypted partitions mentioned in the options are
// present in the gadget.yaml (and thus will also need to have been present
// on the disk)
for gadgetLabel := range opts.ExpectedStructureEncryption {
found := false
for _, gs := range gadgetVolume.Structure {
if gs.Name == gadgetLabel {
found = true
break
}
}
if !found {
return nil, fmt.Errorf("expected encrypted structure %s not present in gadget", gadgetLabel)
}
}
return gadgetStructIdxToOnDiskStruct, nil
}
// diskTraitsFromEMMCDevice builds the disk volume traits for an eMMC device,
// mapping each gadget structure to its corresponding eMMC pseudo-device
// (e.g. boot0, boot1).
func diskTraitsFromEMMCDevice(diskLayout *OnDiskVolume, mmc disks.Disk, vol *Volume) (res DiskVolumeDeviceTraits, err error) {
mappedStructures := make([]DiskStructureDeviceTraits, 0, len(vol.Structure))
for _, vs := range vol.Structure {
mmcPartDev := fmt.Sprintf("%s%s", mmc.KernelDeviceNode(), vs.Name)
mmcPart, err := disks.DiskFromDeviceName(mmcPartDev)
if err != nil {
return res, fmt.Errorf("cannot get disk for device %s: %v", mmcPartDev, err)
}
sz, err := mmcPart.SizeInBytes()
if err != nil {
return res, fmt.Errorf("cannot get size of device %s: %v", mmcPartDev, err)
}
mappedStructures = append(mappedStructures, DiskStructureDeviceTraits{
OriginalDevicePath: mmcPart.KernelDevicePath(),
OriginalKernelPath: mmcPart.KernelDeviceNode(),
Size: quantity.Size(sz),
})
}
return DiskVolumeDeviceTraits{
OriginalDevicePath: mmc.KernelDevicePath(),
OriginalKernelPath: mmc.KernelDeviceNode(),
DiskID: mmc.DiskID(),
Structure: mappedStructures,
Size: diskLayout.Size,
SectorSize: diskLayout.SectorSize,
Schema: mmc.Schema(),
}, nil
}
// TODO:ICE: remove this as we only support LUKS (and ICE is a variant of LUKS now)
type DiskEncryptionMethod string
const (
// values for the "method" key of encrypted structure information
// standard LUKS as it is used for automatic FDE using SecureBoot and TPM
// 2.0 in UC20+
EncryptionLUKS DiskEncryptionMethod = "LUKS"
)
// DiskVolumeValidationOptions is a set of options on how to validate a disk to
// volume mapping for a specific disk/volume pair. It is closely related to the
// options provided to EnsureVolumeCompatibility via
// EnsureVolumeCompatibilityOptions.
type DiskVolumeValidationOptions struct {
// AllowImplicitSystemData has the same meaning as the eponymously named
// field in VolumeCompatibilityOptions.
AllowImplicitSystemData bool
// ExpectedStructureEncryption is a map of the names (gadget structure
// names) of partitions that are encrypted on the volume to information
// about that encryption.
ExpectedStructureEncryption map[string]StructureEncryptionParameters
}
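// As an illustrative sketch (values are hypothetical), a caller validating a
// volume from an encrypted UC20+ install might pass:
//
//	opts := &DiskVolumeValidationOptions{
//		ExpectedStructureEncryption: map[string]StructureEncryptionParameters{
//			"ubuntu-data": {Method: EncryptionLUKS},
//			"ubuntu-save": {Method: EncryptionLUKS},
//		},
//	}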
// DiskTraitsFromDeviceAndValidate takes a gadget volume and an
// expected disk device path and confirms that they are compatible,
// and then builds up the disk volume traits for that device. If the
// laid out volume is not compatible with the disk structure for the
// specified device an error is returned.
func DiskTraitsFromDeviceAndValidate(vol *Volume, dev string, opts *DiskVolumeValidationOptions) (res DiskVolumeDeviceTraits, err error) {
if opts == nil {
opts = &DiskVolumeValidationOptions{}
}
// get the disk layout for this device
diskLayout, err := OnDiskVolumeFromDevice(dev)
if err != nil {
return res, fmt.Errorf("cannot read %v partitions for candidate volume %s: %v", dev, vol.Name, err)
}
// ensure that the on disk volume and the gadget volume are actually
// compatible
volCompatOpts := &VolumeCompatibilityOptions{
// at this point all partitions should be created
AssumeCreatablePartitionsCreated: true,
// provide the other opts as we were provided
AllowImplicitSystemData: opts.AllowImplicitSystemData,
ExpectedStructureEncryption: opts.ExpectedStructureEncryption,
}
gadgetToDiskStruct, err := EnsureVolumeCompatibility(vol, diskLayout, volCompatOpts)
if err != nil {
return res, fmt.Errorf("volume %s is not compatible with disk %s: %v", vol.Name, dev, err)
}
// also get a Disk{} interface for this device
disk, err := disks.DiskFromDeviceName(dev)
if err != nil {
return res, fmt.Errorf("cannot get disk for device %s: %v", dev, err)
}
// For eMMC block devices there will be regular partitions, but also
// non-partitions in the form of pseudo-devices such as boot0, boot1, and
// rpmb. Depending on the volume given, we handle the traits differently.
if vol.Schema == schemaEMMC {
// The volume is targeting eMMC specific "partitions". These will not show up
// in any normal setting, but appear as different devices instead. We have to handle
// this.
return diskTraitsFromEMMCDevice(diskLayout, disk, vol)
}
diskPartitions, err := disk.Partitions()
if err != nil {
return res, fmt.Errorf("cannot get partitions for disk device %s: %v", dev, err)
}
// make a map of start offsets to partitions for lookup
diskPartitionsByOffset := make(map[uint64]disks.Partition, len(diskPartitions))
for _, p := range diskPartitions {
diskPartitionsByOffset[p.StartInBytes] = p
}
mappedStructures := make([]DiskStructureDeviceTraits, 0, len(diskLayout.Structure))
// create the traits for each structure looping over the gadget structures
// to ensure that extra partitions don't sneak in - we double check things
// again below this loop
for _, structure := range vol.Structure {
// don't create traits for non-partitions, there is nothing we can
// measure on the disk about bare structures other than perhaps reading
// their content - the fact that bare structures do not overlap with
// real partitions will have been validated when the YAML was validated
// previously
if !structure.IsPartition() {
continue
}
ds, ok := gadgetToDiskStruct[structure.YamlIndex]
if !ok {
return res, fmt.Errorf("internal error: all disk structures should have been matched")
}
part, ok := diskPartitionsByOffset[uint64(ds.StartOffset)]
if !ok {
// unexpected error - somehow this structure's start offset is not
// present in the OnDiskVolume, which is unexpected because we
// validated that the gadget volume structure matches the on disk
// volume
return res, fmt.Errorf("internal error: inconsistent disk structures from gadget and disks.Disk: structure starting at %d missing on disk", ds.StartOffset)
}
ms := DiskStructureDeviceTraits{
Size: quantity.Size(part.SizeInBytes),
Offset: quantity.Offset(part.StartInBytes),
PartitionUUID: part.PartitionUUID,
OriginalKernelPath: part.KernelDeviceNode,
OriginalDevicePath: part.KernelDevicePath,
PartitionType: part.PartitionType,
PartitionLabel: part.PartitionLabel, // this will be empty on dos disks
FilesystemLabel: part.FilesystemLabel, // blkid encoded
FilesystemUUID: part.FilesystemUUID, // blkid encoded
FilesystemType: part.FilesystemType,
}
mappedStructures = append(mappedStructures, ms)
// delete this partition from the map
delete(diskPartitionsByOffset, uint64(ds.StartOffset))
}
// We should have deleted all structures from diskPartitionsByOffset that
// are in the gadget.yaml volume, however there is a small
// possibility (mainly due to bugs) where we could still have partitions in
// diskPartitionsByOffset. So we check to make sure there are no partitions
// left over.
// However, the one notable exception to this is in the case of legacy UC16
// or UC18 gadgets where the system-data role could have been left out and
// ubuntu-image would dynamically create the partition. In this case, we
// ought to just ignore this on-disk structure since it is not in the
// gadget.yaml, and the primary use case of tracking disks and structures is
// for gadget asset update, but by definition something which is not in the
// gadget.yaml cannot be updated via gadget asset updates.
switch len(diskPartitionsByOffset) {
case 0:
// expected, no implicit system-data
break
case 1:
// could be implicit system-data
if opts.AllowImplicitSystemData {
var part disks.Partition
for _, part = range diskPartitionsByOffset {
break
}
s, err := OnDiskStructureFromPartition(part)
if err != nil {
return res, err
}
if onDiskStructureIsLikelyImplicitSystemDataRole(vol, diskLayout, s) {
// it is likely the implicit system-data
logger.Debugf("Identified implicit system-data role on system as %s", s.Node)
break
}
}
fallthrough
default:
// we for sure have left over partitions that should have been in the
// gadget.yaml - make a nice string with what partitions are leftover
leftovers := []string{}
for _, part := range diskPartitionsByOffset {
leftovers = append(leftovers, part.KernelDeviceNode)
}
if vol.HasPartial(PartialStructure) {
logger.Debugf("additional partitions on disk %s ignored as the gadget has partial structures: %v", disk.KernelDeviceNode(), leftovers)
} else {
// this is an internal error because to get here we would have had to
// pass validation in EnsureVolumeCompatibility but then still have
// extra partitions - the only non-buggy situation where that function
// passes validation but leaves partitions on disk not in the YAML is
// the implicit system-data role handled above
return res, fmt.Errorf("internal error: unexpected additional partitions on disk %s not present in the gadget layout: %v", disk.KernelDeviceNode(), leftovers)
}
}
return DiskVolumeDeviceTraits{
OriginalDevicePath: disk.KernelDevicePath(),
OriginalKernelPath: dev,
DiskID: diskLayout.ID,
Structure: mappedStructures,
Size: diskLayout.Size,
SectorSize: diskLayout.SectorSize,
Schema: disk.Schema(),
StructureEncryption: opts.ExpectedStructureEncryption,
}, nil
}
// unable to proceed with the gadget asset update, but not fatal to the refresh
// operation itself
var errSkipUpdateProceedRefresh = errors.New("cannot identify disk for gadget asset update")
// buildNewVolumeToDeviceMapping builds a DiskVolumeDeviceTraits for only the
// volume containing the system-boot role, when we cannot load an existing
// traits object from disk-mapping.json. It is meant to be used with
// UC16/UC18 installs, as well as UC20 installs from before we started
// writing disk-mapping.json during install mode.
func buildNewVolumeToDeviceMapping(mod Model, oldVolumes, newVolumes map[string]*Volume) (map[string]DiskVolumeDeviceTraits, error) {
var likelySystemBootVolume string
isPreUC20 := (mod.Grade() == asserts.ModelGradeUnset)
if len(oldVolumes) == 1 {
// If we only have one volume, then that is the volume we are concerned
// with; we do not validate that it has a system-boot role on it like
// we do in the multi-volume case below, because we used to allow
// installation of gadgets that have no system-boot role on them at all
for volName := range oldVolumes {
likelySystemBootVolume = volName
}
} else {
// we need to pick the volume, since updates for this setup are best
// effort and mainly focused on the main volume with system-* roles
// on it, we need to pick the volume with that role
volumeLoop:
for volName, vol := range oldVolumes {
for _, structure := range vol.Structure {
if structure.Role == SystemBoot {
// this is the volume
likelySystemBootVolume = volName
break volumeLoop
}
}
}
}
if likelySystemBootVolume == "" {
// this is only possible in the case where there is more than one volume
// and we didn't find system-boot anywhere, in this case for pre-UC20
// we use a non-fatal error and just don't perform any update - this was
// always the old behavior so we are not regressing here
if isPreUC20 {
logger.Noticef("WARNING: cannot identify disk for gadget asset update: unable to find any volume with system-boot role on it")
return nil, errSkipUpdateProceedRefresh
}
// on UC20 and later however this is a fatal error, we should never have
// allowed installation of a gadget which does not have the system-boot
// role on it
return nil, fmt.Errorf("cannot find any volume with system-boot, gadget is broken")
}
vol := newVolumes[likelySystemBootVolume]
// search for matching devices that correspond to the gadget volume
dev, err := MaybeDeviceForVolume(vol)
if err != nil {
// TODO: should this be a fatal error?
return nil, err
} else if dev == "" {
// couldn't find a disk at all, pre-UC20 we just warn about this
// but let the update continue
if isPreUC20 {
logger.Noticef("WARNING: cannot identify disk for gadget asset update of volume %s", likelySystemBootVolume)
return nil, errSkipUpdateProceedRefresh
}
// fatal error on UC20+
return nil, fmt.Errorf("cannot identify disk for gadget asset update of volume %s", likelySystemBootVolume)
}
// we found the device, construct the traits with validation options
validateOpts := &DiskVolumeValidationOptions{
// allow implicit system-data on pre-uc20 only
AllowImplicitSystemData: isPreUC20,
}
// setup encrypted structure information to perform validation if this
// device used encryption
if !isPreUC20 {
// TODO: this needs to check if the specified partitions are ICE when
// we support ICE too
// check if there is a marker file written, that will indicate if
// encryption was turned on
if device.HasEncryptedMarkerUnder(dirs.SnapFDEDir) {
// then we have the crypto marker file for encryption
// cross-validation between ubuntu-data and ubuntu-save stored from
// install mode, so mark ubuntu-save and data as expected to be
// encrypted
validateOpts.ExpectedStructureEncryption = map[string]StructureEncryptionParameters{
"ubuntu-data": {Method: EncryptionLUKS},
"ubuntu-save": {Method: EncryptionLUKS},
}
}
}
traits, err := DiskTraitsFromDeviceAndValidate(vol, dev, validateOpts)
if err != nil {
if isPreUC20 {
logger.Noticef("WARNING: not applying gadget asset updates on main system-boot volume due to error while finding disk traits: %v", err)
return nil, errSkipUpdateProceedRefresh
}
return nil, err
}
// TODO: should we save the traits here so they can be re-used in another
// future update routine?
return map[string]DiskVolumeDeviceTraits{
likelySystemBootVolume: traits,
}, nil
}
// StructureLocation represents the location of a structure for updating
// purposes. Either Device + Offset must be set for a raw structure without a
// filesystem, or RootMountPoint must be set for structures with a
// filesystem.
type StructureLocation struct {
// Device is the kernel device node path such as /dev/vda1 for the
// structure's backing physical disk.
Device string
// Offset is the offset from 0 for the physical disk that this structure
// starts at.
Offset quantity.Offset
// RootMountPoint is the directory where the root directory of the structure
// is mounted read/write. There may be other mount points for this structure
// on the system, but this one is guaranteed to be writable and thus
// suitable for gadget asset updates.
RootMountPoint string
}
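// As an illustrative sketch (values are hypothetical), the two mutually
// exclusive forms a StructureLocation can take:
//
//	// raw structure without a filesystem
//	raw := StructureLocation{Device: "/dev/vda", Offset: quantity.OffsetMiB}
//	// structure with a filesystem mounted read/write
//	fs := StructureLocation{RootMountPoint: "/run/mnt/ubuntu-boot"}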
func buildLocationsForVolumeStructures(vol *Volume, disk disks.Disk, structs map[int]*OnDiskStructure, encryptionParams map[string]StructureEncryptionParameters) (map[int]StructureLocation, error) {
locations := make(map[int]StructureLocation)
// the index here is 0-based and is equal to VolumeStructure.YamlIndex
for volYamlIndex, volStruct := range vol.Structure {
structStartOffset := structs[volYamlIndex].StartOffset
loc := StructureLocation{}
if volStruct.HasFilesystem() {
// Here we know what disk is associated with this volume, so we
// just need to find what partition is associated with this
// structure to find its root mount points. On GPT, since
// partition labels/names are unique in the partition table, we
// could do a lookup by matching partition label, but this won't
// work on MBR which doesn't have such a concept, so instead we
// use the start offset to locate which disk partition this
// structure is equal to.
partitions, err := disk.Partitions()
if err != nil {
return nil, err
}
var foundP disks.Partition
found := false
for _, p := range partitions {
if p.StartInBytes == uint64(structStartOffset) {
foundP = p
found = true
break
}
}
if !found {
return nil, fmt.Errorf("cannot locate structure %d on volume %s: no matching start offset", volYamlIndex, vol.Name)
}
// if this structure is an encrypted one, then we can't just
// get the root mount points for the device node, we would need
// to find the decrypted mapper device for the encrypted device
// node and then find the root mount point of the mapper device
if _, ok := encryptionParams[volStruct.Name]; ok {
logger.Noticef("gadget asset update for assets on encrypted partition %s unsupported", volStruct.Name)
// leaving this structure as an empty location will
// mean when an update to this structure is actually
// performed it will fail, but we won't fail updates to
// other structures - it is treated like an unmounted
// partition
locations[volYamlIndex] = loc
continue
}
// otherwise normal unencrypted filesystem, find the rw mount
// points
mountpts, err := disks.MountPointsForPartitionRoot(foundP, map[string]string{"rw": ""})
if err != nil {
return nil, fmt.Errorf("cannot locate structure %d on volume %s: error searching for root mount points: %v", volYamlIndex, vol.Name, err)
}
var mountpt string
if len(mountpts) == 0 {
// this filesystem is not already mounted, we probably
// should mount it in order to proceed with the update?
// TODO: do something better here?
logger.Noticef("structure %d on volume %s (%s) is not mounted read/write anywhere to be able to update it", volYamlIndex, vol.Name, foundP.KernelDeviceNode)
} else {
// use the first one, it doesn't really matter to us
// which one is used to update the contents
mountpt = mountpts[0]
}
loc.RootMountPoint = mountpt
} else {
// no filesystem, the device for this one is just the device
// for the disk itself
loc.Device = disk.KernelDeviceNode()
loc.Offset = structStartOffset
// Specifically for eMMC devices, the boot0 and boot1 partitions are not
// really partitions, but actually pseudo-devices. So even though there
// is no filesystem, we still must address each sub-device as if the
// device had a filesystem.
if vol.Schema == schemaEMMC {
switch volStruct.Name {
case "boot0", "boot1":
loc.Device += volStruct.Name
// rpmb also exists but we do not handle this
default:
return nil, fmt.Errorf("structure %s on volume %s is not a valid eMMC partition", volStruct.Name, vol.Name)
}
}
}
locations[volYamlIndex] = loc
}
return locations, nil
}
// buildVolumeStructureToLocation builds a map of gadget volumes to
// locations and to matched disk structures.
func buildVolumeStructureToLocation(mod Model,
oldVolumes map[string]*Volume,
newVolumes map[string]*Volume,
volToDeviceMapping map[string]DiskVolumeDeviceTraits,
missingInitialMapping bool,
) (map[string]map[int]StructureLocation, map[string]map[int]*OnDiskStructure, error) {
isPreUC20 := (mod.Grade() == asserts.ModelGradeUnset)
// helper function for handling non-fatal errors on pre-UC20
maybeFatalError := func(err error) error {
if missingInitialMapping && isPreUC20 {
// this is not a fatal error on pre-UC20
logger.Noticef("WARNING: not applying gadget asset updates on main system-boot volume due to error mapping volume to physical disk: %v", err)
return errSkipUpdateProceedRefresh
}
return err
}
volumeStructureToLocation := make(map[string]map[int]StructureLocation, len(oldVolumes))
gadgetVolToPartMap := make(map[string]map[int]*OnDiskStructure, len(oldVolumes))
// now for each volume, iterate over the structures, putting the
// necessary info into the map for that volume as we iterate
// this loop assumes that none of those things are different between the
// new and old volume, which may not be true in the case where an
// unsupported structure change is present in the new one, but we check that
// situation after we have built the mapping
for volName, diskDeviceTraits := range volToDeviceMapping {
gadgetVolToPartMap[volName] = make(map[int]*OnDiskStructure)
oldVol, ok := oldVolumes[volName]
if !ok {
return nil, nil, fmt.Errorf("internal error: volume %s not present in gadget.yaml but present in traits mapping", volName)
}
newVol, ok := newVolumes[volName]
if !ok {
return nil, nil, fmt.Errorf("internal error: missing volume %s", volName)
}
// find the disk associated with this volume using the traits we
// measured for this volume
validateOpts := &DiskVolumeValidationOptions{
// implicit system-data role only allowed on pre UC20 systems
AllowImplicitSystemData: isPreUC20,
ExpectedStructureEncryption: diskDeviceTraits.StructureEncryption,
}
disk, gadgetToDiskStruct, err := searchVolumeWithTraitsAndMatchParts(newVol, diskDeviceTraits, validateOpts)
if err != nil {
dieErr := fmt.Errorf("could not map volume %s from gadget.yaml to any physical disk: %v", volName, err)
return nil, nil, maybeFatalError(dieErr)
}
gadgetVolToPartMap[volName] = gadgetToDiskStruct
locations, err := buildLocationsForVolumeStructures(oldVol, disk, gadgetToDiskStruct, diskDeviceTraits.StructureEncryption)
if err != nil {
return nil, nil, maybeFatalError(err)
}
volumeStructureToLocation[volName] = locations
}
return volumeStructureToLocation, gadgetVolToPartMap, nil
}
func MockVolumeStructureToLocationMap(f func(_ Model, _, _ map[string]*Volume) (
map[string]map[int]StructureLocation, map[string]map[int]*OnDiskStructure, error)) (restore func()) {
old := volumeStructureToLocationMap
volumeStructureToLocationMap = f
return func() {
volumeStructureToLocationMap = old
}
}
// use indirection to allow mocking
var volumeStructureToLocationMap = volumeStructureToLocationMapImpl
// volumeStructureToLocationMapImpl builds a map of gadget structures
// to locations and to matched disk structures. For the locations, the
// first key is the volume name, and the second key is the structure's
// index in the list of structures on that volume. The value is the
// StructureLocation that can actually be used to perform the
// lookup/update in applyUpdates. For the matched disk, the first key
// is the volume name and the second key is the yaml index of the
// structure in the gadget definition. The value is the disk structure
// that matches the gadget description.
func volumeStructureToLocationMapImpl(mod Model, oldVolumes, newVolumes map[string]*Volume) (
map[string]map[int]StructureLocation, map[string]map[int]*OnDiskStructure, error) {
// first try to load the disk-mapping.json volume trait info
volToDeviceMapping, err := LoadDiskVolumesDeviceTraits(dirs.SnapDeviceDir)
if err != nil {
return nil, nil, err
}
missingInitialMapping := false
// check if we had no mapping; if so, we try our best to build a mapping
// for the system-boot volume only, to perform gadget asset updates there.
// If we fail to build a mapping, then on UC18 we non-fatally return
// without doing any updates, while on UC20 we fail the refresh because we
// expect UC20's gadget.yaml validation to be robust
if len(volToDeviceMapping) == 0 {
// then there was no mapping provided, this is a system which never
// performed the initial saving of disk/volume mapping info during
// install, so we build up a mapping specifically just for the
// volume with the system-boot role on it
// TODO: after we calculate this the first time should we save a new
// disk-mapping.json with this information and some marker that this
// was not calculated at first boot but a later date?
// TODO: the rest of this function in this case is technically not as
// efficient as it could be, since we build up these heuristics here and
// then immediately below treat them as if they were from the initial
// install boot and thus could have changed, even though there is no way
// for this mapping to have changed between this point and the code
// below. In the interest of sharing the same codepath for all cases,
// we treat this heuristic mapping data the same.
missingInitialMapping = true
var err error
volToDeviceMapping, err = buildNewVolumeToDeviceMapping(mod, oldVolumes, newVolumes)
if err != nil {
return nil, nil, err
}
// volToDeviceMapping should always be of length one
var volName string
for volName = range volToDeviceMapping {
break
}
// if there are multiple volumes leave a message that we are only
// performing updates for the volume with the system-boot role
if len(oldVolumes) != 1 {
logger.Noticef("WARNING: gadget has multiple volumes but updates are only being performed for volume %s", volName)
}
}
// now that we have some traits about the volume -> disk mapping, either
// because we just constructed it or that we were provided it the .json file
// we have to build up a map for the updaters to use to find the structure
// location to update given the VolumeStructure
return buildVolumeStructureToLocation(
mod,
oldVolumes,
newVolumes,
volToDeviceMapping,
missingInitialMapping,
)
}
// validateVolumesMatch checks that the old and new gadgets have the same set
// of volumes and identical device assignments; adding or removing volumes in
// a gadget update is not supported.
func validateVolumesMatch(old, new map[string]*Volume) error {
oldVolumes := make([]string, 0, len(old))
newVolumes := make([]string, 0, len(new))
for oldVol := range old {
oldVolumes = append(oldVolumes, oldVol)
}
for newVol := range new {
newVolumes = append(newVolumes, newVol)
}
common := strutil.Intersection(newVolumes, oldVolumes)
// check dissimilar cases between common, new and old
switch {
case len(common) != len(newVolumes) && len(common) != len(oldVolumes):
// there are both volumes removed from old and volumes added to new
return fmt.Errorf("cannot update gadget assets: volumes were both added and removed")
case len(common) != len(newVolumes):
// then there are volumes in old that are not in new, i.e. a volume
// was removed
return fmt.Errorf("cannot update gadget assets: volumes were removed")
case len(common) != len(oldVolumes):
// then there are volumes in new that are not in old, i.e. a volume
// was added
return fmt.Errorf("cannot update gadget assets: volumes were added")
}
// check things like assigned device-path switching
// at this point here we can assume the lists are identical
for name, cvol := range old {
// the new one must match
nvol := new[name]
if cvol.AssignedDevice != nvol.AssignedDevice {
return fmt.Errorf("cannot update gadget assets: device assignment is not identical for %q", name)
}
}
return nil
}
// Update applies the gadget update given the gadget information and data from
// old and new revisions. It errors out when the update is not possible or
// illegal, or a failure occurs at any of the steps. When there is no update, a
// special error ErrNoUpdate is returned.
//
// Only structures selected by the update policy are part of the update. When
// the policy is nil, a default one is used. The default policy selects
// structures in an opt-in manner, only structures with a higher value of Edition
// field in the new gadget definition are part of the update.
//
// Data that would be modified during the update is first backed up inside the
// rollback directory. Should the apply step fail, the modified data is
// recovered.
//
// The rules for gadget/kernel updates with "$kernel:refs":
//
// 1. When installing a kernel with assets that have "update: true"
// there *must* be a matching entry in gadget.yaml. If not we risk
// bricking the system because the kernel tells us that it *needs*
// this file to boot but without gadget.yaml we would not put it
// anywhere.
// 2. When installing a gadget with "$kernel:ref" content it is okay
// if this content cannot get resolved as long as there is no
// "edition" jump. This means adding new "$kernel:ref" without
// "edition" updates is always possible.
//
// To add a new "$kernel:ref" to gadget/kernel:
// a. Update gadget and gadget.yaml and add "$kernel:ref" but do not update
// edition (if edition update is needed, use epoch)
// b. Update kernel and kernel.yaml with new assets.
// c. snapd will refresh gadget (see rule 2) but refuse to take the new
// kernel (rule 1)
// d. After step (c) is completed the kernel refresh will now also work (no more
// violation of rule 1)
func Update(model Model, old, new GadgetData, rollbackDirPath string, updatePolicy UpdatePolicyFunc, observer ContentUpdateObserver) error {
// The gadgets can only match if they have identical volumes assigned for
// the (currently) matching device
oldVolumes, _, err := VolumesForCurrentDevice(old.Info)
if err != nil {
return fmt.Errorf("cannot update gadget assets: %v", err)
}
newVolumes, _, err := VolumesForCurrentDevice(new.Info)
if err != nil {
return fmt.Errorf("cannot update gadget assets: %v", err)
}
// if the volumes from the old and the new gadgets do not match, then fail -
// we don't support adding or removing volumes from the gadget.yaml
if err := validateVolumesMatch(oldVolumes, newVolumes); err != nil {
return err
}
if updatePolicy == nil {
updatePolicy = defaultPolicy
}
// collect the updates and validate that they are doable from an abstract
// sense first
// note that this code is written such that before we perform any update, we
// validate that all updates are valid and that all volumes are compatible
// between the old and the new state, this is to prevent applying valid
// updates on one volume when another volume is invalid, if that's the case
// we treat the whole gadget as invalid and return an error blocking the
// refresh
// TODO: should we handle the updates on multiple volumes in a
// deterministic order? iterating over maps is not deterministic, but we
// perform all updates at the end together in one call
// ensure all required kernel assets are found in the gadget
kernelInfo, err := kernel.ReadInfo(new.KernelRootDir)
if err != nil {
return err
}
allKernelAssets := []string{}
for assetName, asset := range kernelInfo.Assets {
if !asset.Update {
continue
}
allKernelAssets = append(allKernelAssets, assetName)
}
atLeastOneKernelAssetConsumed := false
// build the map of volume structures to locations and of disk structures
structureLocations, volToPartsMap, err := volumeStructureToLocationMap(model, oldVolumes, newVolumes)
if err != nil {
if err == errSkipUpdateProceedRefresh {
// we couldn't successfully build a map for the structure locations,
// but for various reasons this isn't considered a fatal error for
// the gadget refresh, so just return nil instead, a message should
// already have been logged
return nil
}
return err
}
// Layout new volume, delay resolving of filesystem content
opts := &LayoutOptions{
SkipResolveContent: true,
GadgetRootDir: new.RootDir,
KernelRootDir: new.KernelRootDir,
}
allUpdates := []updatePair{}
laidOutVols := map[string]*LaidOutVolume{}
for volName, oldVol := range oldVolumes {
newVol := newVolumes[volName]
// layout old partially, without going deep into the layout of structure
// content
pOld, err := layoutVolumePartially(oldVol, volToPartsMap[volName])
if err != nil {
return fmt.Errorf("cannot lay out the old volume %s: %v", volName, err)
}
pNew, err := LayoutVolume(newVol, volToPartsMap[volName], opts)
if err != nil {
return fmt.Errorf("cannot lay out the new volume %s: %v", volName, err)
}
laidOutVols[volName] = pNew
if err := canUpdateVolume(pOld, pNew); err != nil {
return fmt.Errorf("cannot apply update to volume %s: %v", volName, err)
}
// if we haven't consumed any kernel assets yet check if this volume
// consumes at least one - we require at least one asset to be consumed
// by some volume in the gadget
if !atLeastOneKernelAssetConsumed {
consumed, err := gadgetVolumeKernelUpdateAssetsConsumed(pNew.Volume, kernelInfo)
if err != nil {
return err
}
atLeastOneKernelAssetConsumed = consumed
}
// now we know which structure is which, find which ones need an update
updates, err := resolveUpdate(pOld, pNew, updatePolicy, new.RootDir, new.KernelRootDir, kernelInfo)
if err != nil {
return err
}
// can update old layout to new layout
for _, update := range updates {
fromIdx, err := oldVol.yamlIdxToStructureIdx(update.from.VolumeStructure.YamlIndex)
if err != nil {
return err
}
toIdx, err := newVol.yamlIdxToStructureIdx(update.to.VolumeStructure.YamlIndex)
if err != nil {
return err
}
if err := canUpdateStructure(oldVol, fromIdx, newVol, toIdx); err != nil {
return fmt.Errorf("cannot update volume structure %v for volume %s: %v", update.to, volName, err)
}
}
// collect updates per volume into a single set of updates to perform
// at once
allUpdates = append(allUpdates, updates...)
}
// check that, if the kernel declared any assets needing a synced update,
// at least one of them was consumed by some volume
if len(allKernelAssets) != 0 && !atLeastOneKernelAssetConsumed {
sort.Strings(allKernelAssets)
return fmt.Errorf("gadget does not consume any of the kernel assets needing synced update %s", strutil.Quoted(allKernelAssets))
}
if len(allUpdates) == 0 {
// nothing to update
return ErrNoUpdate
}
if len(newVolumes) != 1 {
logger.Debugf("gadget asset update routine for multiple volumes")
// check if the structure location map has only one volume in it - this
// is the case in legacy update operations where we only support updates
// to the system-boot / main volume
if len(structureLocations) == 1 {
// log a message and drop all updates to structures not in the
// volume we have
supportedVolume := ""
for volName := range structureLocations {
supportedVolume = volName
}
keepUpdates := make([]updatePair, 0, len(allUpdates))
for _, update := range allUpdates {
if update.volume.Name != supportedVolume {
// TODO: or should we error here instead?
logger.Noticef("skipping update on unsupported volume %s to structure %s", update.volume.Name, update.to.Name())
} else {
keepUpdates = append(keepUpdates, update)
}
}
allUpdates = keepUpdates
}
}
// apply all updates at once
if err := applyUpdates(structureLocations, new, allUpdates, rollbackDirPath, observer); err != nil {
return err
}
return nil
}

func resolveVolume(old *Info, new *Info) (oldVol, newVol *Volume, err error) {
// support only one volume
if len(new.Volumes) != 1 || len(old.Volumes) != 1 {
return nil, nil, errors.New("cannot update with more than one volume")
}
var name string
for n := range old.Volumes {
name = n
break
}
oldV := old.Volumes[name]
newV, ok := new.Volumes[name]
if !ok {
return nil, nil, fmt.Errorf("cannot find entry for volume %q in updated gadget info", name)
}
return oldV, newV, nil
}

func isLegacyMBRTransition(from *VolumeStructure, to *VolumeStructure) bool {
// legacy MBR could have been specified by setting type: mbr, with no
// role
return from.Type == schemaMBR && to.Role == schemaMBR
}

func effectivePartSize(part *VolumeStructure) quantity.Size {
// Partitions with partial size are set as unbounded (their Size field is 0)
if part.hasPartialSize() {
return UnboundedStructureSize
}
return part.Size
}

func arePossibleSizesCompatible(from *VolumeStructure, to *VolumeStructure) bool {
// Check whether [from.MinSize,from.Size], the interval of sizes allowed
// in "from", intersects with [to.MinSize,to.Size], the interval of sizes
// allowed in "to". When both comparisons below hold there is some overlap
// between the segments (this can be visualized by sliding one segment
// along the axis while the other stays fixed, for a moving segment either
// smaller or bigger than the fixed one).
return effectivePartSize(from) >= to.MinSize && from.MinSize <= effectivePartSize(to)
}

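// As an illustration (the values are hypothetical): a "from" structure
// allowing sizes in [100, 200] and a "to" structure allowing sizes in
// [150, 300] overlap at [150, 200] and are considered compatible, while
// [100, 200] against [250, 300] would not be.
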
func arePossibleOffsetsCompatible(vss1 []VolumeStructure, idx1 int, vss2 []VolumeStructure, idx2 int) bool {
// See comment in arePossibleSizesCompatible, this is the same check but
// for offsets instead of sizes.
return maxStructureOffset(vss1, idx1) >= minStructureOffset(vss2, idx2) &&
minStructureOffset(vss1, idx1) <= maxStructureOffset(vss2, idx2)
}

func arePartitionTypesCompatible(from, to *VolumeStructure) bool {
// As long as there is an intersection of the possible types we are fine
fromTs := strings.Split(from.Type, ",")
toTs := strings.Split(to.Type, ",")
for _, tp := range fromTs {
if strutil.ListContains(toTs, tp) {
return true
}
}
return isLegacyMBRTransition(from, to)
}

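// As an illustration (the concrete types are examples): a "from" type of
// "0C,EBD0A0A2-B9E5-4433-87C0-68B6B72699C7" (a hybrid MBR/GPT type) is
// compatible with a "to" type of "EBD0A0A2-B9E5-4433-87C0-68B6B72699C7",
// since the GPT type GUID appears in both comma-separated lists.
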
// canUpdateStructure checks gadget compatibility on updates, looking only at
// features that are not reflected on the installed disk (for this we check
// elsewhere the new gadget against the actual disk content).
//
// Partial properties are not checked as they will be checked against the real
// disk later, in EnsureVolumeCompatibility. TODO Some checks should maybe
// happen only there even for non-partial gadgets.
func canUpdateStructure(fromV *Volume, fromIdx int, toV *Volume, toIdx int) error {
from := &fromV.Structure[fromIdx]
to := &toV.Structure[toIdx]
if !toV.HasPartial(PartialSchema) && toV.Schema == schemaGPT && from.Name != to.Name {
// partition names are only effective when GPT is used
return fmt.Errorf("cannot change structure name from %q to %q",
from.Name, to.Name)
}
if !arePossibleSizesCompatible(from, to) {
return fmt.Errorf("new valid structure size range [%v, %v] is not compatible with current ([%v, %v])",
to.MinSize, effectivePartSize(to), from.MinSize, effectivePartSize(from))
}
if !arePossibleOffsetsCompatible(fromV.Structure, fromIdx, toV.Structure, toIdx) {
return fmt.Errorf("new valid structure offset range [%v, %v] is not compatible with current ([%v, %v])",
minStructureOffset(toV.Structure, toIdx), maxStructureOffset(toV.Structure, toIdx), minStructureOffset(fromV.Structure, fromIdx), maxStructureOffset(fromV.Structure, fromIdx))
}
if from.Role != to.Role {
return fmt.Errorf("cannot change structure role from %q to %q",
from.Role, to.Role)
}
if !arePartitionTypesCompatible(from, to) {
return fmt.Errorf("cannot change structure type from %q to %q",
from.Type, to.Type)
}
if from.ID != to.ID {
return fmt.Errorf("cannot change structure ID from %q to %q", from.ID, to.ID)
}
if to.HasFilesystem() {
if !from.HasFilesystem() {
return fmt.Errorf("cannot change a bare structure to a filesystem one")
}
// With a partial filesystem the field is an empty string. Here we
// allow moving from an undefined filesystem to a defined one, but
// not from defined to undefined, nor changing a defined filesystem.
if from.Filesystem != "" && from.Filesystem != to.Filesystem {
return fmt.Errorf("cannot change filesystem from %q to %q",
from.Filesystem, to.Filesystem)
}
if from.Label != to.Label {
return fmt.Errorf("cannot change filesystem label from %q to %q",
from.Label, to.Label)
}
} else {
if from.HasFilesystem() {
return fmt.Errorf("cannot change a filesystem structure to a bare one")
}
}
return nil
}

func canUpdateVolume(from *PartiallyLaidOutVolume, to *LaidOutVolume) error {
if from.ID != to.ID {
return fmt.Errorf("cannot change volume ID from %q to %q", from.ID, to.ID)
}
if err := checkCompatibleSchema(from.Volume, to.Volume); err != nil {
return err
}
if len(from.LaidOutStructure) != len(to.LaidOutStructure) {
return fmt.Errorf("cannot change the number of structures within volume from %v to %v", len(from.LaidOutStructure), len(to.LaidOutStructure))
}
return nil
}

type updatePair struct {
from *LaidOutStructure
to *LaidOutStructure
volume *Volume
}

func defaultPolicy(from, to *LaidOutStructure) (bool, ResolvedContentFilterFunc) {
return to.VolumeStructure.Update.Edition > from.VolumeStructure.Update.Edition, nil
}

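// As an illustration: with defaultPolicy, a structure declaring
// "update: {edition: 2}" in the old gadget and "update: {edition: 3}" in
// the new one is selected for update; equal or lower editions are skipped.
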
// RemodelUpdatePolicy implements the update policy of a remodel scenario. The
// policy selects all non-MBR structures for the update.
func RemodelUpdatePolicy(from, to *LaidOutStructure) (bool, ResolvedContentFilterFunc) {
if from.Role() == schemaMBR {
return false, nil
}
return true, nil
}

// KernelUpdatePolicy implements the update policy for kernel asset updates.
//
// This is called when there is a kernel->kernel refresh for kernels that
// contain bootloader assets. In this case all bootloader assets that are
// marked as "update: true" in the kernel.yaml need updating.
//
// Any non-kernel assets need to be ignored; they will be handled by
// the regular gadget->gadget update mechanism and policy.
func KernelUpdatePolicy(from, to *LaidOutStructure) (bool, ResolvedContentFilterFunc) {
// The policy function has to work on unresolved content, the
// returned filter will make sure that after resolving only the
// relevant $kernel:refs are updated.
for _, ct := range to.VolumeStructure.Content {
if strings.HasPrefix(ct.UnresolvedSource, "$kernel:") {
return true, func(rn *ResolvedContent) bool {
return rn.KernelUpdate
}
}
}
return false, nil
}

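// As an illustration (the content path is hypothetical): a structure whose
// gadget.yaml content includes a source like "$kernel:dtbs/" is selected
// by KernelUpdatePolicy, and the returned filter then keeps only resolved
// content that was marked as a kernel update.
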
func resolveUpdate(oldVol *PartiallyLaidOutVolume, newVol *LaidOutVolume, policy UpdatePolicyFunc, newGadgetRootDir, newKernelRootDir string, kernelInfo *kernel.Info) (updates []updatePair, err error) {
if len(oldVol.LaidOutStructure) != len(newVol.LaidOutStructure) {
return nil, errors.New("internal error: the number of structures in new and old volume definitions is different")
}
// We must order updates from the latest binary in the boot
// chain to the earliest, so any seed partitions come
// after boot partitions.
var seedUpdates []updatePair
var bootUpdates []updatePair
for j, oldStruct := range oldVol.LaidOutStructure {
newStruct := newVol.LaidOutStructure[j]
updatesTarget := &updates
if strings.HasPrefix(newStruct.Role(), "system-seed") {
updatesTarget = &seedUpdates
} else if strings.HasPrefix(newStruct.Role(), "system-boot") {
updatesTarget = &bootUpdates
}
// update only when the policy says so; boot assets are assumed to be
// backwards compatible, and once deployed they are not rolled back or
// replaced unless the policy requests it
if update, filter := policy(&oldStruct, &newStruct); update {
// Ensure content is resolved and filtered. Filtering
// is required for e.g. KernelUpdatePolicy, see above.
resolvedContent, err := resolveVolumeContent(newGadgetRootDir, newKernelRootDir, kernelInfo, newStruct.VolumeStructure, filter)
if err != nil {
return nil, err
}
// No resolved or raw content that would need updating
if len(resolvedContent) == 0 && len(newStruct.LaidOutContent) == 0 {
continue
}
newVol.LaidOutStructure[j].ResolvedContent = resolvedContent
// and add to updates
*updatesTarget = append(*updatesTarget, updatePair{
from: &oldVol.LaidOutStructure[j],
to: &newVol.LaidOutStructure[j],
volume: newVol.Volume,
})
}
}
updates = append(updates, bootUpdates...)
updates = append(updates, seedUpdates...)
return updates, nil
}

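// As an illustration: for a volume containing a structure with no special
// role, a system-boot structure and a system-seed structure, resolveUpdate
// emits the unclassified update first, then system-boot, then system-seed.
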
type Updater interface {
// Update applies the update or errors out on failures. When no actual
// update was applied because the new content is identical a special
// ErrNoUpdate is returned.
Update() error
// Backup prepares a backup copy of the data that will be modified by
// Update().
Backup() error
// Rollback restores the data modified by Update().
Rollback() error
}

func updateLocationForStructure(structureLocations map[string]map[int]StructureLocation, ps *LaidOutStructure) (loc StructureLocation, err error) {
loc, ok := structureLocations[ps.VolumeStructure.VolumeName][ps.VolumeStructure.YamlIndex]
if !ok {
return loc, fmt.Errorf("structure with index %d on volume %s not found", ps.VolumeStructure.YamlIndex, ps.VolumeStructure.VolumeName)
}
if !ps.HasFilesystem() {
if loc.Device == "" {
return loc, fmt.Errorf("internal error: structure %d on volume %s should have had a device set but did not have one in an internal mapping", ps.VolumeStructure.YamlIndex, ps.VolumeStructure.VolumeName)
}
return loc, nil
} else {
if loc.RootMountPoint == "" {
// We cannot update this structure: it has a filesystem specified in
// gadget.yaml, but the partition is not mounted anywhere writable for
// us to update the filesystem content. There is a TODO in
// buildVolumeStructureToLocation above about possibly mounting it; we
// could also mount it here and proceed with the update, but we would
// then also need a way to unmount it when done.
return loc, fmt.Errorf("structure %d on volume %s does not have a writable mountpoint in order to update the filesystem content", ps.VolumeStructure.YamlIndex, ps.VolumeStructure.VolumeName)
}
return loc, nil
}
}

func applyUpdates(structureLocations map[string]map[int]StructureLocation, new GadgetData, updates []updatePair, rollbackDir string, observer ContentUpdateObserver) error {
updaters := make([]Updater, len(updates))
for i, one := range updates {
loc, err := updateLocationForStructure(structureLocations, one.to)
if err != nil {
return fmt.Errorf("cannot prepare update for volume structure %v on volume %s: %v", one.to, one.volume.Name, err)
}
up, err := updaterForStructure(loc, one.from, one.to, new.RootDir, rollbackDir, observer)
if err != nil {
return fmt.Errorf("cannot prepare update for volume structure %v on volume %s: %v", one.to, one.volume.Name, err)
}
updaters[i] = up
}
var backupErr error
for i, one := range updaters {
if err := one.Backup(); err != nil {
backupErr = fmt.Errorf("cannot backup volume structure %v on volume %s: %v", updates[i].to, updates[i].volume.Name, err)
break
}
}
if backupErr != nil {
if observer != nil {
if err := observer.Canceled(); err != nil {
logger.Noticef("cannot observe canceled prepare update: %v", err)
}
}
return backupErr
}
if observer != nil {
if err := observer.BeforeWrite(); err != nil {
return fmt.Errorf("cannot observe prepared update: %v", err)
}
}
// Inject fault during update of boot assets
osutil.MaybeInjectFault("update-boot-assets")
var updateErr error
var updateLastAttempted int
var skipped int
for i, one := range updaters {
updateLastAttempted = i
if err := one.Update(); err != nil {
if err == ErrNoUpdate {
skipped++
continue
}
updateErr = fmt.Errorf("cannot update volume structure %v on volume %s: %v", updates[i].to, updates[i].volume.Name, err)
break
}
}
if skipped == len(updaters) {
// all updates were a noop
return ErrNoUpdate
}
if updateErr == nil {
// all good, updates applied successfully
return nil
}
logger.Noticef("cannot update gadget: %v", updateErr)
// not so good, rollback ones that got applied
for i := 0; i <= updateLastAttempted; i++ {
one := updaters[i]
if err := one.Rollback(); err != nil {
// TODO: log errors to oplog
logger.Noticef("cannot rollback volume structure %v update on volume %s: %v", updates[i].to, updates[i].volume.Name, err)
}
}
if observer != nil {
if err := observer.Canceled(); err != nil {
logger.Noticef("cannot observe canceled update: %v", err)
}
}
return updateErr
}

var updaterForStructure = updaterForStructureImpl

func updaterForStructureImpl(loc StructureLocation, fromPs *LaidOutStructure, ps *LaidOutStructure, newRootDir, rollbackDir string, observer ContentUpdateObserver) (Updater, error) {
// TODO: this is sort of clunky, we already did the lookup, but doing the
// lookup out of band from this function makes for easier mocking
if !ps.HasFilesystem() {
lookup := func(ps *LaidOutStructure) (device string, offs quantity.Offset, err error) {
return loc.Device, loc.Offset, nil
}
return newRawStructureUpdater(newRootDir, ps, rollbackDir, lookup)
} else {
lookup := func(ps *LaidOutStructure) (string, error) {
return loc.RootMountPoint, nil
}
return newMountedFilesystemUpdater(fromPs, ps, rollbackDir, lookup, observer)
}
}

// MockUpdaterForStructure replaces the internal updaterForStructure call with
// a mocked one, for use in tests only.
func MockUpdaterForStructure(mock func(loc StructureLocation, fromPs, ps *LaidOutStructure, rootDir, rollbackDir string, observer ContentUpdateObserver) (Updater, error)) (restore func()) {
old := updaterForStructure
updaterForStructure = mock
return func() {
updaterForStructure = old
}
}