File: index.rst

Machine Learning
================

This component is an experimental local inference engine for machine learning,
built on Transformers.js and the ONNX runtime.

In the example below, an image is converted to text using the ``image-to-text`` task.


.. code-block:: javascript

  const { PipelineOptions, EngineProcess } = ChromeUtils.importESModule(
    "chrome://global/content/ml/EngineProcess.sys.mjs"
  );

  // First, we create a pipeline options object, which contains the task name
  // and any other options needed for the task.
  const options = new PipelineOptions({ taskName: "image-to-text" });

  // Next, we create an engine parent object via EngineProcess
  const engineParent = await EngineProcess.getMLEngineParent();

  // We then create the engine object using the options.
  const engine = engineParent.getEngine(options);

  // We prepare a request pointing at the image to process.
  const request = { url: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg" };

  // At this point we are ready to do some inference.
  const res = await engine.run(request);

  // The result is a string containing the text extracted from the image
  console.log(res);
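
For convenience, the whole sequence can be wrapped in a small helper. The sketch
below is illustrative only: it reuses the APIs shown above (``EngineProcess``,
``PipelineOptions`` and ``engine.run``), and the ``imageUrlToText`` helper name is
made up for this example, not part of the API.

.. code-block:: javascript

  const { PipelineOptions, EngineProcess } = ChromeUtils.importESModule(
    "chrome://global/content/ml/EngineProcess.sys.mjs"
  );

  // Illustrative helper (not part of the API): run the image-to-text task
  // on a single image URL and return the engine's result.
  async function imageUrlToText(url) {
    const options = new PipelineOptions({ taskName: "image-to-text" });
    const engineParent = await EngineProcess.getMLEngineParent();
    const engine = engineParent.getEngine(options);
    return engine.run({ url });
  }

  // Usage: caption the same sample image as in the example above.
  const text = await imageUrlToText(
    "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg"
  );
  console.log(text);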


Supported Inference Tasks
:::::::::::::::::::::::::

The following tasks are supported by the machine learning engine:

.. js:autofunction:: imageToText