---
orphan: true
---

(2023_meeting)=

# 2023 Annual PETSc Meeting

```{image} https://petsc.gitlab.io/annual-meetings/2023/GroupPhoto.jpg
:alt: PETSc User Meeting 2023 group photo (Hermann Hall, 06/06/2023)
:width: 800
```

June 5-7, 2023, at the [Hermann Hall Conference Center](https://www.iit.edu/event-services/meeting-spaces/hermann-hall-conference-center)
in the Hermann Ballroom (after entering Hermann Hall through the main entrance, walk straight back to the rear of the building and turn right),
3241 South Federal Street, Chicago, IL,
on the campus of [The Illinois Institute of Technology (IIT)](https://www.iit.edu) in Chicago.
There is easy access from the hotels via the Chicago Elevated [Green](https://www.transitchicago.com/greenline) or [Red](https://www.transitchicago.com/redline) Lines.
[For parking, use Lot B5 (32nd & Federal St.)](https://www.iit.edu/cbsc/parking/visitor-and-event-parking).

Please test for Covid before attending the meeting and
wear a mask while traveling to it.

In addition to a newbie user tutorial and a {any}`newbie_developer_workshop`, the meeting will include a "speed dating" session where users can ask questions of developers (and each other) about technical details of their particular simulations. Finally, the meeting will be interspersed with mini-tutorials that will dive into particular aspects of PETSc that users may not be familiar with.

## Meeting times

- Monday, June 5: 1 pm to 5:30 pm
- Tuesday, June 6: 10:15 am to 5:30 pm
- Wednesday, June 7: 9 am to 3 pm

PETSc newbie user lightning tutorial:

- Monday, June 5: 10 am to 12 pm

PETSc {any}`newbie_developer_workshop`:

- Tuesday, June 6: 9 am to 10 am

## Registration

Please register at [EventBrite](https://www.eventbrite.com/e/petsc-2023-user-meeting-tickets-494165441137) to save your seat. There is a 100-dollar registration fee to cover breaks and lunches; it may be waived if you cannot afford it.

## Submit a presentation

[Submit an abstract](https://docs.google.com/forms/d/e/1FAIpQLSesh47RGVb9YD9F1qu4obXSe1X6fn7vVmjewllePBDxBItfOw/viewform) by May 1st (but preferably now) to be included in the schedule. We welcome talks from all perspectives, including those who

- contribute to PETSc,
- use PETSc in their applications or libraries,
- develop the libraries and packages [called from PETSc](https://petsc.org/release/install/external_software/), and even
- are curious about using PETSc in their applications.

## Suggested hotels

- [Receive IIT hotel discounts.](https://www.iit.edu/procurement-services/purchasing/preferred-and-contract-vendors/hotels)

- More Expensive

  - [Hilton Chicago](https://www.hilton.com/en/hotels/chichhh-hilton-chicago/?SEO_id=GMB-AMER-HI-CHICHHH&y_source=1_NzIxNzU2LTcxNS1sb2NhdGlvbi53ZWJzaXRl) 720 S Michigan Ave, Chicago
  - [Hotel Blake, an Ascend Hotel Collection Member](https://www.choicehotels.com/illinois/chicago/ascend-hotels/il480) 500 S Dearborn St, Chicago, IL 60605
  - [The Blackstone, Autograph Collection](https://www.marriott.com/en-us/hotels/chiab-the-blackstone-autograph-collection/overview/?scid=f2ae0541-1279-4f24-b197-a979c79310b0) 636 South Michigan Avenue Lobby Entrance On, E Balbo Dr, Chicago

- Inexpensive

  - [Travelodge by Wyndham Downtown Chicago](https://www.wyndhamhotels.com/travelodge/chicago-illinois/travelodge-hotel-downtown-chicago/overview?CID=LC:TL::GGL:RIO:National:10073&iata=00093796) 65 E Harrison St, Chicago
  - [The Congress Plaza Hotel & Convention Center](https://www.congressplazahotel.com/?utm_source=local-directories&utm_medium=organic&utm_campaign=travelclick-localconnect) 520 S Michigan Ave, Chicago
  - [Hilton Garden Inn Chicago Downtown South Loop](https://www.hilton.com/en/hotels/chidlgi-hilton-garden-inn-chicago-downtown-south-loop/?SEO_id=GMB-AMER-GI-CHIDLGI&y_source=1_MTI2NDg5NzktNzE1LWxvY2F0aW9uLndlYnNpdGU%3D) 55 E 11th St, Chicago

## Agenda

### Monday, June 5

| Time     | Title                                                                                                                        | Speaker                 |
| -------- | ---------------------------------------------------------------------------------------------------------------------------- | ----------------------- |
| 10:00 am | Newbie tutorial ([Slides][s_00], [Video][v_00])                                                                              |                         |
| 11:30 am | Follow-up questions and meetings                                                                                             |                         |
| 12:00 pm | **Lunch** for tutorial attendees and early arrivals                                                                          |                         |
| 1:00 pm  | Some thoughts on the future of PETSc ([Slides][s_01], [Video][v_01])                                                         | [Barry Smith]           |
| 1:30 pm  | A new nonhydrostatic capability for MPAS-Ocean ([Slides][s_02], [Video][v_02])                                               | [Sara Calandrini]       |
| 2:00 pm  | MultiFlow: A coupled balanced-force framework to solve multiphase flows in arbitrary domains ([Slides][s_03], [Video][v_03]) | [Berend van Wachem]     |
| 2:30 pm  | Mini tutorial: PETSc and PyTorch interoperability ([Slides][s_04], [Video][v_04], [IPython code][c_04])                      | [Hong Zhang (Mr.)]      |
| 2:45 pm  | **Coffee Break**                                                                                                             |                         |
| 3:00 pm  | Towards enabling digital twins capabilities for a cloud chamber (slides and video unavailable)                               | [Vanessa Lopez-Marrero] |
| 3:30 pm  | PETSc ROCKS ([Slides][s_06], [Video][v_06])                                                                                  | [David May]             |
| 4:00 pm  | Software Development and Deployment Including PETSc ([Slides][s_07], [Video][v_07])                                          | [Tim Steinhoff]         |
| 4:30 pm  | Multiscale, Multiphysics Simulation Through Application Composition Using MOOSE ([Slides][s_08], [Video][v_08])              | [Derek Gaston]          |
| 5:00 pm  | PETSc Newton Trust-Region for Simulating Large-scale Engineered Subsurface Systems with PFLOTRAN ([Slides][s_09])            | [Heeho Park]            |
| 5:30 pm  | End of first day                                                                                                             |                         |

### Tuesday, June 6

| Time     | Title                                                                                                                                                   | Speaker                  |
| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ |
| 9:00 am  | Newbie Developer Workshop (optional)                                                                                                                    |                          |
| 10:00 am | **Coffee Break**                                                                                                                                        |                          |
| 10:15 am | Experiences in solving nonlinear eigenvalue problems with SLEPc ([Slides][s_10], [Video][v_10])                                                         | [Jose E. Roman]          |
| 10:45 am | MPI Multiply Threads ([Slides][s_11], [Video][v_11])                                                                                                    | [Hui Zhou]               |
| 11:15 am | Mini tutorial: PETSc on the GPU ([Slides][s_12], [Video][v_12])                                                                                         | [Junchao Zhang]          |
| 11:30 am | AMD GPU benchmarking, documentation, and roadmap ([Slides][s_13], video unavailable)                                                                    | [Justin Chang]           |
| 12:00 pm | **Lunch**                                                                                                                                               |                          |
| 1:00 pm  | Mini tutorial: petsc4py ([Slides][s_14], [Video][v_14])                                                                                                 | [Stefano Zampini]        |
| 1:15 pm  | Transparent Asynchronous Compute Made Easy With PETSc ([Slides][s_15], [Video][v_15])                                                                   | [Jacob Faibussowitsch]   |
| 1:45 pm  | Using Kokkos Ecosystem with PETSc on modern architectures ([Slides][s_16])                                                                              | [Luc Berger-Vergiat]     |
| 2:15 pm  | Intel oneAPI Math Kernel Library, what’s new and what’s next? ([Slides][s_17], [Video][v_17])                                                           | [Spencer Patty]          |
| 2:45 pm  | Mini tutorial: DMPlex ([Video][v_18], slides unavailable)                                                                                               | [Matt Knepley]           |
| 3:00 pm  | **Coffee Break**                                                                                                                                        |                          |
| 3:15 pm  | Scalable cloud-native thermo-mechanical solvers using PETSc (slides and video unavailable)                                                              | [Ashish Patel]           |
| 3:45 pm  | A mimetic finite difference based quasi-static magnetohydrodynamic solver for force-free plasmas in tokamak disruptions ([Slides][s_20], [Video][v_20]) | [Zakariae Jorti]         |
| 4:15 pm  | High-order FEM implementation in AMReX using PETSc ([Slides][s_21], [Video][v_21])                                                                      | [Alex Grant]             |
| 4:45 pm  | An Immersed Boundary method for Elastic Bodies Using PETSc ([Slides][s_22], [Video][v_22])                                                              | [Mohamad Ibrahim Cheikh] |
| 5:15 pm  | Mini tutorial: DMNetwork ([Slides][s_23], [Video][v_23])                                                                                                | [Hong Zhang (Ms.)]       |
| 5:30 pm  | End of second day                                                                                                                                       |                          |

### Wednesday, June 7

| Time     | Title                                                                                                                               | Speaker                             |
| -------- | ----------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- |
| 9:00 am  | XGCm: An Unstructured Mesh Gyrokinetic Particle-in-cell Code for Exascale Fusion Plasma Simulations ([Slides][s_24], [Video][v_24]) | [Chonglin Zhang]                    |
| 9:30 am  | PETSc-PIC: A Structure-Preserving Particle-In-Cell Method for Electrostatic Solves ([Slides][s_25], [Video][v_25])                  | [Daniel Finn]                       |
| 9:57 am  | Landau Collisions in the Particle Basis with PETSc-PIC ([Slides][s_26], [Video][v_26])                                              | [Joseph Pusztay]                    |
| 10:15 am | **Coffee Break**                                                                                                                    |                                     |
| 10:30 am | Mini tutorial: DMSwarm ([Slides][s_27], [Video][v_27])                                                                              | [Joseph Pusztay\*][joseph pusztay*] |
| 10:45 am | Scalable Riemann Solvers with the Discontinuous Galerkin Method for Hyperbolic Network Simulation ([Slides][s_28], [Video][v_28])   | [Aidan Hamilton]                    |
| 11:15 am | Numerical upscaling of network models using PETSc ([Slides][s_29], [Video][v_29])                                                   | [Maria Vasilyeva]                   |
| 11:45 am | Mini tutorial: TaoADMM ([Slides][s_30], [Video][v_30])                                                                              | [Hansol Suh]                        |
| 12:00 pm | **Lunch**                                                                                                                           |                                     |
| 1:00 pm  | PETSc in the Ionosphere ([Slides][s_31], [Video][v_31])                                                                             | [Matt Young]                        |
| 1:30 pm  | From the trenches: porting mef90 ([Slides][s_32], [Video][v_32])                                                                    | [Blaise Bourdin]                    |
| 2:00 pm  | PERMON library for quadratic programming ([Slides][s_33], [Video][v_33])                                                            | [Jakub Kruzik]                      |
| 2:22 pm  | Distributed Machine Learning for Natural Hazard Applications Using PERMON ([Slides][s_34], [Video][v_34])                           | [Marek Pecha]                       |
| 2:45 pm  | Wrap up                                                                                                                             |                                     |
| 3:00 pm  | End of meeting                                                                                                                      |                                     |

(newbie_developer_workshop)=

## Newbie Developer Workshop

Tuesday, June 6, at 9 am. Some of the topics to be covered:

- {any}`Exploring the developer documentation<ind_developers>`

- {any}`petsc_developers_communication_channels`

- {any}`PETSc Git branch organization<sec_integration_branches>`

- {any}`ch_contributing`

  - {any}`Starting a merge request (MR)<ch_developingmr>`
  - {any}`Submitting and monitoring an MR<ch_submittingmr>`
  - {any}`GitLab CI pipelines<pipelines>`
  - {any}`PETSc style guide<style>`

- Reviewing someone else's MR

- Adding new Fortran and Python function bindings

- PETSc's

  - {any}`configure system<ch_buildsystem>`
  - compiler system, and
  - {any}`testing system including the GitLab CI<test_harness>`

- Any other topics requested by potential contributors

## Abstracts

(luc-berger-vergiat)=

:::{topic} **Using Kokkos Ecosystem with PETSc on modern architectures**
**Luc Berger-Vergiat**

Sandia National Laboratories

Supercomputers increasingly rely on GPUs to achieve high
throughput while maintaining a reasonable power consumption. Consequently,
scientific applications are adapting to this new environment, and new
algorithms are designed to leverage the high concurrency of GPUs. In this
presentation, I will show how the Kokkos Ecosystem can help alleviate some
of the difficulties associated with support for multiple CPU/GPU
architectures. I will also show some results using the Kokkos and Kokkos
Kernels libraries with PETSc on modern architectures.
:::

(blaise-bourdin)=

:::{topic} **From the trenches: porting mef90**
**Blaise Bourdin**

McMaster University

mef90 is a distributed three-dimensional unstructured finite-element
implementation of various phase-field models of fracture. In this talk,
I will share the experience gained while porting mef90 from PETSc 3.3 to 3.18.
:::

(sara-calandrini)=

:::{topic} **A new non-hydrostatic capability for MPAS-Ocean**
**Sara Calandrini**, Darren Engwirda, Luke Van Roekel

Los Alamos National Laboratory

The Model for Prediction Across Scales-Ocean (MPAS-Ocean) is an
open-source, global ocean model and is one component within the Department
of Energy’s E3SM framework, which includes atmosphere, sea ice, and
land-ice models. In this work, a new formulation for the ocean model is
presented that solves the non-hydrostatic, incompressible Boussinesq
equations on unstructured meshes. The introduction of this non-hydrostatic
capability is necessary for the representation of fine-scale dynamical
processes, including resolution of internal wave dynamics and large eddy
simulations. Compared to the standard hydrostatic formulation,
a non-hydrostatic pressure solver and a vertical momentum equation are
added, where the PETSc (Portable, Extensible Toolkit for Scientific
Computation) library is used for the inversion of a large sparse system for
the non-hydrostatic pressure. Numerical results comparing the solutions of
the hydrostatic and non-hydrostatic models are presented, and the parallel
efficiency and accuracy of the time-stepper are evaluated.
:::

(justin-chang)=

:::{topic} **AMD GPU benchmarking, documentation, and roadmap**
**Justin Chang**

AMD Inc.

This talk comprises three parts. First, we present an overview of some
relatively new training documentation, such as the "AMD lab notes", to help
current and potential users of AMD GPUs get the best experience
out of their applications or algorithms. Second, we briefly discuss
implementation details regarding the PETSc HIP backend introduced into the
PETSc library late last year and present some performance benchmarking data
on some AMD hardware. Lastly, we give a preview of the upcoming
MI300 series APU and how software developers can prepare to leverage this
new type of accelerator.
:::

(mohamad-ibrahim-cheikh)=

:::{topic} **An Immersed Boundary method for Elastic Bodies Using PETSc**
**Mohamad Ibrahim Cheikh**, Konstantin Doubrovinski

Doubrovinski Lab, The University of Texas Southwestern Medical Center

This study presents a parallel implementation of an immersed boundary
method code using the PETSc distributed memory module. This work aims to simulate a complex process that occurs in the
early stages of embryonic development and involves the transformation of
the embryo into a multilayered and multidimensional structure. To
accomplish this, the researchers used the PETSc parallel module to solve
a linear system for the Eulerian fluid dynamics while simultaneously
coupling it with a deforming Lagrangian elastic body to model the
deformable embryonic tissue. This approach allows for a detailed simulation
of the interaction between the fluid and the tissue, which is critical for
accurately modeling the developmental process. Overall, this work
highlights the potential of the immersed boundary method and parallel
computing techniques for simulating complex physical phenomena.
:::

(jacob-faibussowitsch)=

:::{topic} **Transparent Asynchronous Compute Made Easy With PETSc**
**Jacob Faibussowitsch**

Argonne National Laboratory

Asynchronous GPU computing has historically been difficult to integrate scalably at the library level. We provide an update on recent work
implementing a fully asynchronous framework in PETSc. We give detailed
performance comparisons and provide a demo to showcase the proposed model's effectiveness
and ease of use.
:::

(daniel-finn)=

:::{topic} **PETSc-PIC: A Structure-Preserving Particle-In-Cell Method for Electrostatic Solves**
**Daniel Finn**

University at Buffalo

Numerical solutions to the Vlasov-Poisson equations have important
applications in the fields of plasma physics, solar physics, and cosmology.
The goal of this research is to develop a structure-preserving,
electrostatic and gravitational Vlasov-Poisson(-Landau) model using the
Portable, Extensible Toolkit for Scientific Computation (PETSc) and study
the presence of Landau damping in a variety of systems, such as
thermonuclear fusion reactors and galactic dynamics. The PETSc
Particle-In-Cell (PETSc-PIC) model is a highly scalable,
structure-preserving PIC method with multigrid capabilities. In the PIC
method, a hybrid discretization is constructed with a grid of finitely
supported basis functions to represent the electric, magnetic, and/or
gravitational fields, and a distribution of delta functions to represent
the particle field. Collisions are added to the formulation using
a particle-basis Landau collision operator recently added to the PETSc
library.
:::

(derek-gaston)=

:::{topic} **Multiscale, Multiphysics Simulation Through Application Composition Using MOOSE**
**Derek Gaston**

Idaho National Laboratory

Eight years ago, at the PETSc 20 meeting, I introduced the idea of
"Simplifying Multiphysics Through Application Composition" -- the idea
that physics applications can be built in such a way that they can
instantly be combined to tackle complicated multiphysics problems.
This talk will serve as an update on those plans. I will detail the
evolution of that idea, how we’re using it in practice, how well it’s
working, and where we’re going next. Motivating examples will be drawn
from nuclear engineering, and practical aspects, such as testing, will
be explored.
:::

(alex-grant)=

:::{topic} **High-order FEM implementation in AMReX using PETSc**
**Alex Grant**, Karthik Chockalingam, Xiaohu Guo

Science and Technology Facilities Council (STFC), UK

AMReX is a C++ block-structured framework for adaptive mesh refinement,
typically used for finite difference or finite volume codes. We describe
a first attempt at a finite element implementation in AMReX using PETSc.
AMReX splits the domain of uniform elements into rectangular boxes at each
refinement level, with higher levels overlapping rather than replacing
lower levels and with each level solved independently. AMReX boxes can be
cell-centered or nodal; we use cell-centered boxes to represent the geometry
and mesh and nodal boxes to identify nodes to constrain and store results
for visualization. We convert AMReX’s independent spatial indices into
a single global index, then use MATMPIAIJ to assemble the system matrix per
refinement level. In an unstructured grid, isoparametric mapping is
required for each element; using a structured grid avoids both this
and indirect addressing, which provides significant potential performance
advantages. We have solved time-dependent parabolic equations and seen
performance gains compared to unstructured finite elements. Further
developments will include arbitrary higher-order schemes and
multi-level hp refinement with arbitrary hanging nodes. PETSc uses AMReX
domain decomposition to partition the matrix and right-hand-side vectors. For
each higher level, not all of the domain will be refined, but AMReX’s
indices cover the whole space - this poses an indexing challenge and can
lead to over-allocation of memory. It is still to be explored whether DM
data structures would provide a benefit over MATMPIAIJ.
:::

(aidan-hamilton)=

:::{topic} **Scalable Riemann Solvers with the Discontinuous Galerkin Method for Hyperbolic Network Simulation**
**Aidan Hamilton**, Jing-Mei Qiu, Hong Zhang

University of Delaware

We develop highly efficient and effective computational algorithms
and simulation tools for fluid simulations on a network. The mathematical
models are a set of hyperbolic conservation laws on the edges of a network, as
well as coupling conditions at junctions of the network. For example, the
shallow water system, together with flux balance and continuity conditions
at river intersections, models water flows on a river network. Accurate
and robust discontinuous Galerkin methods, coupled with explicit
strong-stability-preserving Runge-Kutta methods, are implemented for
simulations on network edges. Meanwhile, linear and nonlinear scalable
Riemann solvers are being developed and implemented at network vertices.
These network simulations result in tools built on the PETSc and DMNetwork
software libraries for the scientific community in general. Simulations of
a shallow water system on a Mississippi river network with over one billion
network variables are performed on an extreme-scale computer using up to
8,192 processors with optimal parallel efficiency. Further potential
applications include traffic flow simulations on a highway network and
blood flow simulations on an arterial network, among many others.
:::

(zakariae-jorti)=

:::{topic} **A mimetic finite difference based quasi-static magnetohydrodynamic solver for force-free plasmas in tokamak disruptions**
**Zakariae Jorti**, Qi Tang, Konstantin Lipnikov, Xianzhu Tang

Los Alamos National Laboratory

Force-free plasmas are a good approximation in the low-beta case, where the
plasma pressure is tiny compared with the magnetic pressure. On time scales
long compared with the transit time of Alfvén waves, the evolution of
a force-free plasma is most efficiently described by a quasi-static
magnetohydrodynamic (MHD) model, which ignores the plasma inertia. In this
work, we consider a regularized quasi-static MHD model for force-free
plasmas in tokamak disruptions and propose a mimetic finite difference
(MFD) algorithm, which is targeted at applications such as the cold
vertical displacement event (VDE) of a major disruption in an ITER-like
tokamak reactor. In the case of whole device modeling, we further consider
the two sub-domains of the plasma region and wall region and their coupling
through an interface condition. We develop a parallel, fully implicit, and
scalable MFD solver based on PETSc and its DMStag data structure to discretize the five-field quasi-static perpendicular plasma dynamics
model on a 3D structured mesh. The MFD spatial discretization is coupled
with a fully implicit DIRK scheme. The full algorithm exactly preserves the
divergence-free condition of the magnetic field under a generalized Ohm’s
law. The preconditioner employed is a four-level fieldsplit preconditioner,
created by combining separate preconditioners for individual
fields, that calls multigrid or direct solvers for sub-blocks or exact
factorization on the separate fields. The numerical results confirm the
divergence-free constraint is strongly satisfied and demonstrate the
performance of the fieldsplit preconditioner and overall algorithm. The
simulation of ITER VDE cases over the actual plasma current diffusion time
is also presented.
:::

(jakub-kruzik)=

:::{topic} **PERMON library for quadratic programming**
**Jakub Kruzik**, Marek Pecha, David Horak

VSB - Technical University of Ostrava, Czechia

PERMON (Parallel, Efficient, Robust, Modular, Object-oriented, Numerical)
is a library based on PETSc for solving quadratic programming (QP)
problems. We will demonstrate the usage of PERMON on our implementation of the FETI
(finite element tearing and interconnecting) method. This FETI
implementation involves a chain of QP transformations, such as
dualization, which simplifies a given QP. We will also discuss some useful
options, like viewing Karush-Kuhn-Tucker (optimality) conditions for each
QP in the chain. Finally, we will showcase some QP applications solved by
PERMON, such as the solution of contact problems for hydro-mechanical
problems with discrete fracture networks or the solution of support vector
machines using the PermonSVM module.
:::

(vanessa-lopez-marrero)=

:::{topic} **Towards enabling digital twins capabilities for a cloud chamber**
**Vanessa Lopez-Marrero**, Kwangmin Yu, Tao Zhang, Mohammad Atif, Abdullah Al Muti Sharfuddin, Fan Yang, Yangang Liu, Meifeng Lin, Foluso Ladeinde, Lingda Li

Brookhaven National Laboratory

Particle-resolved direct numerical simulations (PR-DNS), which resolve not
only the smallest turbulent eddies but also track the development and
the motion of individual particles, are an essential tool for studying
aerosol-cloud-turbulence interactions. For instance, PR-DNS may complement
experimental facilities designed to study key physical processes in
a controlled environment and therefore serve as digital twins for such
cloud chambers. In this talk, we will present our ongoing work aimed at
enabling the use of PR-DNS for this purpose. We will describe the physical
model used, which consists of a set of fluid dynamics equations for
air velocity, temperature, and humidity coupled with a set of equations
for particle (i.e., droplet) growth/tracing. The numerical method used to
solve the model, which employs PETSc solvers in its implementation, will be
discussed, as well as our current efforts to assess performance and
scalability of the numerical solver.
:::

(david-may)=

:::{topic} **PETSc ROCKS**
**David May**

University of California, San Diego

The field of Geodynamics is concerned with understanding
the deformation history of the solid Earth over millions to billions of
year time scales. The infeasibility of extracting a spatially and
temporally complete geological record based on rocks that are currently
exposed at the surface of the Earth compels many geodynamists to employ
computational simulations of geological processes.

In this presentation I will discuss several geodynamic software packages
which utilize PETSc. I intend to highlight how PETSc has played an
important role in enabling and advancing the state of the art in geodynamic
software. I will also summarize my own experiences and observations of how
geodynamic-specific functionality has driven the
development of new general-purpose PETSc functionality.
:::

(heeho-park)=

:::{topic} **PETSc Newton Trust-Region for Simulating Large-scale Engineered Subsurface Systems with PFLOTRAN**
**Heeho Park**, Glenn Hammond, Albert Valocchi

Sandia National Laboratories

Modeling large-scale engineered subsurface systems entails significant
additional numerical challenges. For a nuclear waste repository, the
challenges arise from: (a) the need to accurately represent both the
waste-form processes and the shafts, tunnels, and barriers at the small
spatial scale and the large-scale transport processes throughout geological
formations; (b) the strong contrast in material properties such as porosity
and permeability, and the nonlinear constitutive relations for multiphase
flow; and (c) the decay of high-level nuclear waste, which causes nearby
water to boil off into steam, leading to dry-out. These can lead to an
ill-conditioned Jacobian matrix and non-convergence with Newton’s method
due to discontinuous nonlinearities in constitutive models.

We apply the open-source simulator PFLOTRAN, which employs a finite volume
(FV) discretization and uses the PETSc parallel framework. We implement
within PETSc the general-purpose nonlinear solvers Newton trust-region
dogleg Cauchy (NTRDC) and Newton trust-region (NTR) to demonstrate the
effectiveness of these advanced solvers. The results demonstrate speed-ups
compared to the default solvers of PETSc and the completion of simulations
that never finished with them.

SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
:::

(ashish-patel)=

:::{topic} **Scalable cloud-native thermo-mechanical solvers using PETSc**
**Ashish Patel**, Jeremy Theler, Francesc Levrero-Florencio, Nabil Abboud, Mohammad Sarraf Joshaghani, Scott McClennan

Ansys, Inc.

This talk presents how the Ansys OnScale team uses PETSc to
develop finite element-based thermo-mechanical solvers for scalable
nonlinear simulations on the cloud. We will first provide an overview of
features available in the solver and then discuss how some of the PETSc
objects, like DMPlex and TS, have helped us speed up our development
process. We will also talk about the workarounds we have incorporated to
address the current limitations of some of the functions from DMPlex for
our use cases involving multi-point constraints and curved elements.
Finally, we demonstrate how PETSc’s linear solvers scale on multi-node
cloud instances.
:::

(spencer-patty)=

:::{topic} **Intel oneAPI Math Kernel Library, what’s new and what’s next?**
**Spencer Patty**

Intel Corporation

This talk provides an overview of the Intel® oneAPI Math Kernel Library
(oneMKL) product and software supporting optimized math routines for both
Intel CPUs and GPUs. Given that PETSc already utilizes several
BLAS/LAPACK/Sparse BLAS routines from oneMKL on Intel CPUs, and as part of
the Aurora project with Argonne, we discuss the use of OpenMP offload APIs
for Intel GPUs.
We explore software and hardware improvements for better sparse linear
algebra performance and have an informal discussion of how to further
support the PETSc community.
:::

(marek-pecha)=

:::{topic} **Distributed Machine Learning for Natural Hazard Applications Using PERMON**
**Marek Pecha**, David Horak, Richard Tran Mills, Zachary Langford

VSB – Technical University of Ostrava, Czechia

We will present a software solution for distributed machine learning that
supports computation on multiple GPUs and runs on top of the PETSc
framework, which we will demonstrate in applications related to
natural-hazard localization and detection employing supervised uncertainty
modeling. The solution, called PERMON, is designed for convex optimization
using quadratic programming, and its extension PermonSVM implements
maximal-margin classifier approaches associated with support vector
machines (SVMs). Although deep learning (DL) has become popular in recent
years, SVMs are still applicable. However, unlike DL, the SVM approach requires
additional feature engineering or feature selection. We will present our
workflow and show how to obtain reasonable models for an application
related to wildfire localization in Alaska.
:::

(joseph-pusztay)=

:::{topic} **Landau Collisions in the Particle Basis with PETSc-PIC**
**Joseph Pusztay**, Matt Knepley, Mark Adams

University at Buffalo

The kinetic description of plasma encompasses the fine-scale interactions of
the various bodies of which it is composed, and applies to a litany of
experiments, ranging from laboratory magnetically confined fusion
plasmas to the scale of the solar corona. Of great import to these
descriptions are collisions in the grazing limit, which transfer momentum
between components of the plasma. Until recently, these have best been
described conservatively by finite element discretizations of the Landau
collision integral. In recent years a particle discretization has been
proven to preserve the appropriate eigenfunctions of the system, as well as
physically relevant quantities. I present here recent work on a purely
particle-discretized Landau collision operator that preserves mass,
momentum, and energy, with associated accuracy benchmarks in PETSc.
:::

(jose-e-roman)=

:::{topic} **Experiences in solving nonlinear eigenvalue problems with SLEPc**
**Jose E. Roman**

Universitat Politècnica de València

One of the unique features of SLEPc is the module for the general nonlinear
eigenvalue problem (NEP), where we want to compute a few eigenvalues and
corresponding eigenvectors of a large-scale parameter-dependent matrix
T(lambda). In this talk, we will illustrate the use of NEP in the context
of two applications, one of them coming from the characterization of
resonances in nanophotonic devices, and the other one from a problem in
aeroacoustics.
:::

(barry-smith)=

:::{topic} **Some thoughts on the future of PETSc**
**Barry Smith**

Flatiron Institute

How will PETSc evolve and grow in the future? How can PETSc algorithms and
simulations be integrated into the emerging world of machine learning and
deep neural networks? I will provide an informal discussion of these topics
and my thoughts.
:::

(tim-steinhoff)=

:::{topic} **Software Development and Deployment Including PETSc**
**Tim Steinhoff**, Volker Jacht

Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany

Once it is decided that PETSc shall handle certain numerical subtasks in
your software, the question may arise of how to smoothly incorporate PETSc
into the overall software development and deployment processes. In this
talk, we present our approach to handling such a situation for the code
family AC2, which is developed and distributed by GRS. AC2 is used to
simulate the behavior of nuclear reactors during operation, transients, and
design-basis and beyond-design-basis accidents, up to radioactive releases
to the environment. The talk addresses our experiences, what challenges had
to be overcome, and how we make use of GitLab, CMake, and Docker techniques
to establish a clean incorporation of PETSc into our software development
cycle.
:::

(hansol-suh)=

:::{topic} **TaoADMM**
**Hansol Suh**

Argonne National Laboratory

In this tutorial, we will give an introduction to the ADMM algorithm in
TAO. It will include a walkthrough of the ADMM algorithm with a real-life
example, plus tips on setting up the framework to solve problems with ADMM in PETSc/TAO.
:::

(maria-vasilyeva)=

:::{topic} **Numerical upscaling of network models using PETSc**
**Maria Vasilyeva**

Texas A&M University-Corpus Christi

Multiphysics models on large networks are used in many applications, for
example, pore network models in reservoir simulation, epidemiological
models of disease spread, ecological models on multispecies interaction,
medical applications such as multiscale multidimensional simulations of
blood flow, etc. This work presents the construction of a numerical
upscaling and multiscale method for network models. An accurate
coarse-scale approximation is generated by solving local problems in
sub-networks. Numerical implementation of the network model is performed
based on the PETSc DMNetwork framework. Results are presented for square
and random heterogeneous networks generated by OpenPNM.
:::

(berend-van-wachem)=

:::{topic} **MultiFlow: A coupled balanced-force framework to solve multiphase flows in arbitrary domains**
**Berend van Wachem**, Fabien Evrard

University of Magdeburg, Germany

Since 2000, we have been working on a finite-volume numerical framework
“MultiFlow” to predict multiphase flows in arbitrary domains by solving
various flavors of the incompressible and compressible Navier-Stokes
equations using PETSc. This framework enables the simulation of creeping,
laminar and turbulent flows with droplets and/or particles at various
scales. It relies on a collocated variable arrangement of the unknown
variables and momentum-weighted-interpolation to determine the fluxes at
the cell faces to couple velocity and pressure. To maximize robustness, the
governing flow equations are solved in a coupled fashion, i.e., as part of
a single equation system involving all flow variables. Various modules are
available within the code in addition to its core flow solver, allowing it to
model interfacial and particulate flows at various flow regimes and scales.
The framework heavily relies on the PETSc library not only to solve the
system of governing equations but also for the handling of unknown
variables, parallelization of the computational domain, and exchange of
data over processor boundaries. We are now in the 3rd generation of our
code, currently using a combination of DMDA and DMPlex with the
DMForest/p4est frameworks to allow for adaptive octree refinement of the
computational mesh. In this contribution, we will present the details of
the discretization and the parallel implementation of our framework and
describe its interconnection with the PETSc library. We will then present
some applications of our framework, simulating multiphase flows at various
scales, flow regimes, and resolutions. During this contribution, we will
also discuss our framework's challenges and future objectives.
:::

(matt-young)=

:::{topic} **PETSc in the Ionosphere**
**Matt Young**

University of New Hampshire

A planet's ionosphere is the region of its atmosphere where a fraction
of the constituent atoms or molecules have separated into positive ions and
electrons. Earth's ionosphere extends from roughly 85 km during the day
(higher at night) to the edge of space. This partially ionized regime
exhibits collective behavior and supports electromagnetic phenomena that do
not exist in the neutral (i.e., un-ionized) atmosphere. Furthermore, the
abundance of neutral atoms and molecules leads to phenomena that do not
exist in the fully ionized space environment. In a relatively narrow
altitude range of Earth's ionosphere called the "E region", electrons
behave as typical charged particles -- moving in response to combined
electric and magnetic fields -- while ions collide too frequently with
neutral molecules to respond to the magnetic field. This difference leads
to the Farley-Buneman instability when the local electric field is strong
enough. The Farley-Buneman instability regularly produces irregularities in
the charged-particle densities that are strong enough to reflect radio
signals. Recent research suggests that fully developed turbulent
structures can disrupt GPS communication.

The Electrostatic Parallel Particle-in-Cell (EPPIC) numerical simulation
self-consistently models instability growth and evolution in the E-region
ionosphere. The simulation includes a hybrid mode that treats electrons as
a fluid and treats ions as particles. The particular fluid electron model
requires the solution of an elliptic partial differential equation for the
electrostatic potential at each time step, which we represent as a linear
system that the simulation solves with PETSc. This presentation will
describe the original development of the 2D hybrid simulation, previous
results, recent efforts to extend to 3D, and implications for modeling GPS
scintillation.

:::

(chonglin-zhang)=

:::{topic} **XGCm: An Unstructured Mesh Gyrokinetic Particle-in-cell Code for Exascale Fusion Plasma Simulations**
**Chonglin Zhang**, Cameron W. Smith, Mark S. Shephard

Rensselaer Polytechnic Institute (RPI)

We report the development of XGCm, a new distributed unstructured mesh
gyrokinetic particle-in-cell (PIC) code, short for x-point included
gyrokinetic code mesh-based. The code adopts the physical algorithms of the
well-established XGC code. It is intended as a testbed for experimenting
with new numerical and computational algorithms, which can eventually be
adopted in XGC and other PIC codes. XGCm is developed on top of several
open-source libraries, including Kokkos, PETSc, Omega, and PUMIPic. Omega
and PUMIPic rely on Kokkos to interact with the GPU accelerator, while
PETSc solves the gyrokinetic Poisson equation on either CPU or GPU. We
first discuss the numerical algorithms of our mesh-centric approach for
performing PIC calculations. We then present a code validation study using
the cyclone base case with ion temperature gradient turbulence (case 5 from
Burckel et al., Journal of Physics: Conference Series 260, 2010, 012006).
Finally, we discuss the performance of XGCm and present weak scaling
results using up to the full system (27,648 GPUs) of the Oak Ridge National
Laboratory’s Summit supercomputer. Overall, XGCm executes all PIC
operations on the GPU accelerators and exhibits good performance and
portability.
:::

(hong-zhang-ms)=

:::{topic} **PETSc DMNetwork: A Library for Scalable Network PDE-Based Multiphysics Simulation**
**Hong Zhang (Ms.)**

Argonne National Laboratory, Illinois Institute of Technology

We present DMNetwork, a high-level set of routines included in the PETSc
library for the simulation of multiphysics phenomena over large-scale
networked systems. The library aims at applications with networked
structures like those in electrical, water, and traffic
distribution systems. DMNetwork provides data and topology management,
parallelization for multiphysics systems over a network, and hierarchical
and composable solvers to exploit the problem structure. DMNetwork eases
the simulation development cycle by providing the necessary infrastructure
to define and query the network components through simple abstractions.
:::

(hui-zhou)=

:::{topic} **MPI Multiply Threads**
**Hui Zhou**

Argonne National Laboratory

In the traditional MPI+Thread programming paradigm, MPI and OpenMP each
form their own parallelization. MPI is unaware of the thread
context. The requirement of thread safety and message ordering forces the
MPI library to blindly add critical sections, unnecessarily serializing the
code. On the other hand, OpenMP cannot use MPI for inter-thread
communication. Developers often need to hand-roll algorithms for
collective operations and non-blocking synchronizations.

MPICH recently added a few extensions to address the root issues in
MPI+Thread. The first extension, MPIX stream, allows applications to
explicitly pass the thread context into MPI. The second extension, thread
communicator, allows individual threads in an OpenMP parallel region to use
MPI for inter-thread communications. In particular, this allows an OpenMP
program to use PETSc within a parallel region.

Instead of MPI+Thread, we refer to this new pattern as MPI x Thread.
:::

(junchao-zhang)=

:::{topic} **PETSc on the GPU**
**Junchao Zhang**

Argonne National Laboratory

In this mini-tutorial, we will briefly introduce the GPU backends of PETSc and how to configure, build, run,
and profile PETSc on GPUs. We will also talk about how to port your PETSc code to GPUs.
:::
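
As a flavor of the runtime-configuration style the tutorial above covers, here is a minimal petsc4py sketch (not part of the tutorial materials; it assumes a GPU-enabled PETSc/petsc4py build) that lets the vector backend be chosen from the command line:

```python
# Minimal sketch: the vector type is selected at run time, e.g.
#   python ex.py -vec_type cuda        (requires a CUDA-enabled PETSc build)
import sys
import petsc4py
petsc4py.init(sys.argv)          # forward command-line options to PETSc
from petsc4py import PETSc

v = PETSc.Vec().create()
v.setSizes(1000)
v.setFromOptions()               # picks up -vec_type cuda / hip / kokkos if given
v.set(1.0)
PETSc.Sys.Print("vector norm:", v.norm())
```

Adding `-log_view` to the command line is a common way to check which backend actually performed the work.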

(hong-zhang-mr)=

:::{topic} **PETSc and PyTorch Interoperability**
**Hong Zhang (Mr.)**

Argonne National Laboratory

In this mini-tutorial, we will introduce how to convert between PETSc vectors/matrices and PyTorch tensors,
how to generate a Jacobian or the action of a Jacobian with PyTorch and use it in PETSc, and how to use
PETSc and PyTorch to solve ODEs and train neural ODEs.
:::
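
As a rough illustration of the first item above, the sketch below (not taken from the tutorial; it assumes CPU data, double-precision real PETSc, and that both petsc4py and PyTorch are installed) shares storage between a PETSc vector and a PyTorch tensor through NumPy:

```python
# Minimal sketch: zero-copy CPU round trip between a PETSc Vec and a torch tensor.
import torch
from petsc4py import PETSc

v = PETSc.Vec().createSeq(8)       # sequential PETSc vector
v.set(1.0)

arr = v.getArray()                 # NumPy view of the Vec's CPU buffer
t = torch.from_numpy(arr)          # tensor sharing the same memory
t *= 2.0                           # in-place change is visible to the Vec
del t, arr                         # release the view before using the Vec again
PETSc.Sys.Print("Vec norm after tensor update:", v.norm())

# Wrap an existing tensor's buffer as a PETSc Vec, again without copying.
t2 = torch.linspace(0.0, 1.0, 8, dtype=torch.float64)
w = PETSc.Vec().createWithArray(t2.numpy())
PETSc.Sys.Print("wrapped Vec norm:", w.norm())
```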

(stefano-zampini)=

:::{topic} **petsc4py**
**Stefano Zampini**

King Abdullah University of Science and Technology (KAUST)

In this mini-tutorial, we will introduce the Python binding of PETSc.
:::
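
For readers who have not seen petsc4py before, the following minimal sketch (not part of the tutorial materials) gives a feel for the binding: it assembles a small tridiagonal system and solves it with KSP, leaving the solver choice to command-line options:

```python
# Minimal petsc4py sketch: assemble a 1D Laplacian and solve A x = b.
# (Run serially; a parallel code would set only locally owned rows.)
from petsc4py import PETSc

n = 10
A = PETSc.Mat().createAIJ([n, n], nnz=3)   # sparse AIJ matrix, 3 nonzeros per row
for i in range(n):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

b = A.createVecRight()
b.set(1.0)
x = A.createVecLeft()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()                       # e.g. -ksp_type cg -pc_type jacobi
ksp.solve(b, x)
PETSc.Sys.Print("iterations:", ksp.getIterationNumber())
```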

(matt-knepley)=

:::{topic} **DMPlex**
**Matt Knepley**

University at Buffalo

In this mini-tutorial, we will introduce the DMPlex class in PETSc.
:::
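
As a small taste of the class, here is a minimal petsc4py sketch (not part of the tutorial materials; it assumes a reasonably recent petsc4py) that builds an unstructured box mesh and queries its cell range:

```python
# Minimal sketch: create a 2D simplicial box mesh with DMPlex and inspect it.
from petsc4py import PETSc

dm = PETSc.DMPlex().createBoxMesh([4, 4], simplex=True)   # 4x4 box, triangulated
cStart, cEnd = dm.getHeightStratum(0)    # cells are the height-0 points of the Plex DAG
PETSc.Sys.Print("mesh dimension:", dm.getDimension(), "cells:", cEnd - cStart)
dm.view()                                # print a summary of the mesh
```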

(id2)=

:::{topic} **DMSwarm**
**Joseph Pusztay**

University at Buffalo

In this mini-tutorial, we will introduce the DMSwarm class in PETSc.
:::

[c_04]: https://petsc.gitlab.io/annual-meetings/2023/slides/HongZhangMr.ipynb
[s_00]: https://petsc.gitlab.io/annual-meetings/2023/tutorials/petsc_annual_meeting_2023_tutorial.pdf
[s_01]: https://petsc.gitlab.io/annual-meetings/2023/slides/BarrySmith.pdf
[s_02]: https://petsc.gitlab.io/annual-meetings/2023/slides/SaraCalandrini.pdf
[s_03]: https://petsc.gitlab.io/annual-meetings/2023/slides/BerendvanWachem.pdf
[s_04]: https://petsc.gitlab.io/annual-meetings/2023/slides/HongZhangMr.pdf
[s_06]: https://petsc.gitlab.io/annual-meetings/2023/slides/DavidMay.pdf
[s_07]: https://petsc.gitlab.io/annual-meetings/2023/slides/TimSteinhoff.pdf
[s_08]: https://petsc.gitlab.io/annual-meetings/2023/slides/DerekGaston.pdf
[s_09]: https://petsc.gitlab.io/annual-meetings/2023/slides/HeehoPark.pdf
[s_10]: https://petsc.gitlab.io/annual-meetings/2023/slides/JoseERoman.pdf
[s_11]: https://petsc.gitlab.io/annual-meetings/2023/slides/HuiZhou.pdf
[s_12]: https://petsc.gitlab.io/annual-meetings/2023/slides/JunchaoZhang.pdf
[s_13]: https://petsc.gitlab.io/annual-meetings/2023/slides/JustinChang.pdf
[s_14]: https://petsc.gitlab.io/annual-meetings/2023/slides/StefanoZampini.pdf
[s_15]: https://petsc.gitlab.io/annual-meetings/2023/slides/JacobFaibussowitsch.pdf
[s_16]: https://petsc.gitlab.io/annual-meetings/2023/slides/LucBerger-Vergiat.pdf
[s_17]: https://petsc.gitlab.io/annual-meetings/2023/slides/SpencerPatty.pdf
[s_20]: https://petsc.gitlab.io/annual-meetings/2023/slides/ZakariaeJorti.pdf
[s_21]: https://petsc.gitlab.io/annual-meetings/2023/slides/AlexGrant.pdf
[s_22]: https://petsc.gitlab.io/annual-meetings/2023/slides/MohamadIbrahimCheikh.pdf
[s_23]: https://petsc.gitlab.io/annual-meetings/2023/slides/HongZhangMs.pdf
[s_24]: https://petsc.gitlab.io/annual-meetings/2023/slides/ChonglinZhang.pdf
[s_25]: https://petsc.gitlab.io/annual-meetings/2023/slides/DanielFinn.pdf
[s_26]: https://petsc.gitlab.io/annual-meetings/2023/slides/JosephPusztay.pdf
[s_27]: https://petsc.gitlab.io/annual-meetings/2023/slides/JosephPusztayDMSwarm.pdf
[s_28]: https://petsc.gitlab.io/annual-meetings/2023/slides/AidanHamilton.pdf
[s_29]: https://petsc.gitlab.io/annual-meetings/2023/slides/MariaVasilyeva.pdf
[s_30]: https://petsc.gitlab.io/annual-meetings/2023/slides/HansolSuh.pdf
[s_31]: https://petsc.gitlab.io/annual-meetings/2023/slides/MattYoung.pdf
[s_32]: https://petsc.gitlab.io/annual-meetings/2023/slides/BlaiseBourdin.pdf
[s_33]: https://petsc.gitlab.io/annual-meetings/2023/slides/JakubKruzik.pdf
[s_34]: https://petsc.gitlab.io/annual-meetings/2023/slides/MarekPecha.pdf
[v_00]: https://youtu.be/rm34jR-p0xk
[v_01]: https://youtu.be/vqx6b3Hg_6k
[v_02]: https://youtu.be/pca0jT86qxU
[v_03]: https://youtu.be/obdKq9SBpfw
[v_04]: https://youtu.be/r_icrhAbmSQ
[v_06]: https://youtu.be/0BplD93cSe8
[v_07]: https://youtu.be/vENWhqp7XlI
[v_08]: https://youtu.be/aHL4FIu_q6k
[v_10]: https://youtu.be/2qhtMsvYw4o
[v_11]: https://youtu.be/plfB7XVoqSQ
[v_12]: https://youtu.be/8tmswLh3ez0
[v_14]: https://youtu.be/hhe0Se4pkSg
[v_15]: https://youtu.be/IbjboeTYuAE
[v_17]: https://youtu.be/Baz4GVp4gQc
[v_18]: https://youtu.be/jURFyoONRko
[v_20]: https://youtu.be/k8PozEb4q40
[v_21]: https://youtu.be/0L9boKxXPmA
[v_22]: https://youtu.be/e101L03bO8A
[v_23]: https://youtu.be/heWln8ZIrHc
[v_24]: https://youtu.be/sGP_9JStYR8
[v_25]: https://youtu.be/b-V_j4Vs2OA
[v_26]: https://youtu.be/b-V_j4Vs2OA?t=1200
[v_27]: https://youtu.be/FaAVV8-lnZI
[v_28]: https://youtu.be/Ys0CZLha1pA
[v_29]: https://youtu.be/Br-9WgvPG7Q
[v_30]: https://youtu.be/8WvZ9ggB3x0
[v_31]: https://youtu.be/hS3nOmX_g8I
[v_32]: https://youtu.be/mfdmVbHsYK0
[v_33]: https://youtu.be/2dC_NkGBBnE
[v_34]: https://youtu.be/2dC_NkGBBnE?t=1194