File: index.rst

Firefox AI Runtime
==================

This component is an experimental local inference runtime for machine learning, based on
`Transformers.js <https://huggingface.co/docs/transformers.js/index>`_ and
the `ONNX Runtime <https://onnxruntime.ai/>`_. It lets you run inference tasks
directly in the browser. To try it out, you can pick from the
`1000+ models <https://huggingface.co/models?library=transformers.js>`_
on the Hugging Face Hub that are compatible with this runtime.

To enable it, flip the `browser.ml.enable` preference to `true` in `about:config`,
then visit **about:inference** (Nightly only) or run the following snippet from
privileged JavaScript code in Firefox, such as the browser console:

.. code-block:: javascript

  const { createEngine } = ChromeUtils.importESModule(
    "chrome://global/content/ml/EngineProcess.sys.mjs"
  );
  const engine = await createEngine({ taskName: "summarization" });
  const request = { args: ["This is the text to summarize"] };
  const res = await engine.run(request);
  console.log(res[0].summary_text);
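
`ChromeUtils` and the real engine exist only in privileged Firefox code, so the
snippet above cannot run in an ordinary page or in Node. As a rough sketch of the
calling pattern it relies on, the stub below mimics the assumed request/response
shape: ``run()`` resolves to an array of result objects, e.g. an object with a
``summary_text`` field for the ``"summarization"`` task. The stub's output is
illustrative only, not real model output.

.. code-block:: javascript

  // Hedged stand-in for the privileged createEngine: same calling pattern,
  // but the "summary" is fabricated rather than produced by a model.
  async function createEngine({ taskName }) {
    return {
      run: async ({ args }) => {
        if (taskName !== "summarization") {
          throw new Error(`Unsupported task: ${taskName}`);
        }
        // A real engine would return the model's summary of args[0].
        return [{ summary_text: `Summary of: ${args[0]}` }];
      },
    };
  }

  (async () => {
    const engine = await createEngine({ taskName: "summarization" });
    const res = await engine.run({ args: ["This is the text to summarize"] });
    console.log(res[0].summary_text);
  })();
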


Learn more about the platform:

.. toctree::
   :maxdepth: 1

   architecture
   api
   notifications
   models
   perf
   extensions
   extensions-api-example/README