File: testimonials.shtml

<!--#include virtual="header.txt"-->

<h1>Customer Testimonials</h1>
<HR SIZE=4>

<i>
"With Oxford providing HPC not just to researchers within the
University, but to local businesses and in collaborative projects,
such as the T2K and NQIT projects, the Slurm scheduler really was the
best option to ensure different service level agreements can be
supported. If you look at the Top500 list of the World's fastest
supercomputers, they're now starting to move to Slurm. The scheduler
was specifically requested by the University to support GPUs and the
heterogeneous estate of different CPUs, which the previous TORQUE
scheduler couldn't, so this forms quite an important part of the
overall HPC facility."<br><br>
<a href="http://www.hpcwire.com/off-the-wire/new-hpc-cluster-to-benefit-the-university-of-oxford/">Julian Fielden, Managing Director at OCF</a>
</i>
<HR SIZE=4>

<i>
"In 2010, when we embarked upon our mission to port Slurm to our Cray XT and XE
systems, we discovered first-hand the high quality software engineering that
has gone into the creation of this product. From its very core Slurm has been
designed to be extensible and flexible. Moreover, as our work progressed, we
discovered the high level of technical expertise possessed by SchedMD who was
very quick to respond to our questions with insightful advice, suggestions and
clarifications. In the end we arrived at a solution that more than satisfied
our needs. The project was so successful we have now migrated all our production
science systems to Slurm, including our 20 cabinet Cray XT5 system. The ease
with which we have made this transition is testament to the robustness and
high quality of the product but also to the no-fuss installation and
configuration procedure and the high quality documentation. We have no qualms
about recommending Slurm to any facility, large or small, who wish to make the
break from the various commercial options available today."<br><br>
Colin McMurtrie, Head of Systems, Swiss National Supercomputing Centre
</i>
<HR SIZE=4>

<i>
"Thank you for Slurm! It is one of the nicest pieces of free software
for managing HPC clusters we have come across in a long time.
Both of our Blue Genes are running Slurm and it works fantastically
well.
It's the most flexible, useful scheduling tool I've ever run
across."<br><br>
Adam Todorski, Computational Center for Nanotechnology Innovations,
Rensselaer Polytechnic Institute
</i>
<HR SIZE=4>

<i>
"Awesome! I just read the manual, set it up and it works great.
I tell you, I've used Sun Grid Engine, Torque, PBS Pro and there's
nothing like Slurm."<br><br>
Aaron Knister, Environmental Protection Agency
</i>
<HR SIZE=4>

<i>
"Today our largest IBM computers, BlueGene/L and Purple, ranked #1 and #3
respectively on the November 2005 Top500 list, use Slurm.
This decision reduces large job launch times from tens of minutes to seconds.
This effectively provides
us with millions of dollars worth of additional compute resources without
additional cost.  It also allows our computational scientists to use their
time more effectively.  Slurm is scalable to very large numbers of processors,
another essential ingredient for use at LLNL. This means larger computer
systems can be used than otherwise possible with a commensurate increase in
the scale of problems that can be solved. Slurm's scalability has eliminated
resource management from being a concern for computers of any foreseeable
size. It is one of the best things to happen to massively parallel computing."
<br><br>
Dona Crawford, Associate Director, Lawrence Livermore National Laboratory
</i>
<HR SIZE=4>

<i>
"We are extremely pleased with Slurm and strongly recommend it to others
because it is mature, the developers are highly responsive and
it just works."<br><br>
Jeffrey M. Squyres, Pervasive Technology Labs at Indiana University
</i>
<HR SIZE=4>

<i>
"We adopted Slurm as our resource manager over two years ago when it was at
the 0.3.x release level. Since then it has become an integral and important
component of our production research services. Its stability, flexibility
and performance have allowed us to significantly increase the quality of
experience we offer to our researchers."<br><br>
Dr. Greg Wettstein, North Dakota State University
</i>
<HR SIZE=4>

<i>
"SLURM is the coolest thing since the invention of UNIX...
We now can control who can log into [compute nodes] or at least can control
which ones to allow logging into.  This will be a tremendous help for users
who are developing their apps."<br><br>
Dennis Gurgul, Research Computing, Partners Health Care
</i>
<HR SIZE=4>

<i>
"SLURM is a great product that I'd recommend to anyone setting up a cluster,
or looking to reduce their costs by abandoning an existing commercial
resource manager."<br><br>
Josh Lothian, National Center for Computational Sciences,
Oak Ridge National Laboratory
</i>
<HR SIZE=4>

<i>
"SLURM is under active development, is easy to use, works quite well,
and most important to your harried author, it hasn't been a nightmare
to configure or manage. (Strong praise, that.) I would rank Slurm as
the best of the three open source batching systems available, by rather
a large margin." <br><br>
Bryan O'Sullivan, Pathscale
</i>
<HR SIZE=4>

<i>
"SLURM scales perfectly to the size of MareNostrum without noticeable
performance degradation; the daemons running on the compute nodes are
light enough to not interfere with the applications' processes and the
status reports are accurate and concise, allowing us to spot possible
anomalies in a single sight." <br><br>
Ernest Artiaga, Barcelona Supercomputing Center
</i>
<HR SIZE=4>

<i>
"SLURM was a great help for us in implementing our own very concise
job management system on top of it which could be tailored precisely
to our needs, and which at the same time is very simple to use for
our customers.
In general, we are impressed with the stability, scalability, and performance
of Slurm. Furthermore, Slurm is very easy to configure and use. The fact that
SLURM is open-source software with a free license is also advantageous for us
in terms of cost-benefit considerations." <br><br>
Dr. Wilfried Juling, Director, Scientific Supercomputing Center,
University of Karlsruhe
</i>
<HR SIZE=4>

<i>
"I had missed Slurm initially when looking for software for a cluster and
ended up installing Torque. When I found out about Slurm later, it took
me only a couple of days to go from knowing nothing about it to having a
SLURM cluster that ran better than the Torque one. I just wanted to say that
your focus on more "secondary" stuff in cluster software, like security,
usability and ease of getting started is *really* appreciated." <br><br>
Christian Hudson, ApSTAT Technologies
</i>
<HR SIZE=4>

<i>
"SLURM has been adopted as the parallel allocation infrastructure used
in HP's premier cluster stack, XC System Software. Slurm has permitted
easy scaling of parallel applications on cluster systems with thousands
of processors, and has also proven itself to be highly portable and
efficient between interconnects including Quadrics QsNet, Myrinet,
Infiniband and Gigabit Ethernet."
<br><br>
Bill Celmaster, XC Program Manager, Hewlett-Packard Company
</i>
<HR SIZE=4>

<p style="text-align:center;">Last modified 14 April 2015</p>

<!--#include virtual="footer.txt"-->