PocketSphinx Examples
=====================

This directory contains some examples of basic PocketSphinx library
usage in C and Python.  To compile the C examples, build the
`examples` target.  To see how compilation works by hand, you can
either use the library directly in-place, for example with `simple.c`:

    cmake -DBUILD_SHARED_LIBS=OFF .. && make
    cc -o simple simple.c -I../include -Iinclude -L. -lpocketsphinx -lm
    
or, if PocketSphinx is installed:

    cc -o simple simple.c $(pkg-config --static --libs --cflags pocketsphinx)

If PocketSphinx has not been installed, you will need to set the
`POCKETSPHINX_PATH` environment variable to run the examples:

    POCKETSPHINX_PATH=../model ./simple

The Python scripts, assuming you have installed the `pocketsphinx`
module (see [the top-level README](../README.md) for instructions), can
just be run as-is:

    python simple.py spam.wav
    
Simplest possible example
-------------------------

The examples `simple.c` and `simple.py` read an entire audio file
(only WAV files are supported) and recognize it as a single, possibly
long, utterance.

Segmentation
------------

The example `segment.py` uses voice activity detection to *segment*
the input stream into speech-like regions.
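The core of that approach can be sketched as follows, assuming the
`Endpointer` class from the `pocketsphinx` module (a VAD plus a
start/end-of-speech state machine); see `segment.py` for the real
code:

```python
import sys

from pocketsphinx import Endpointer

ep = Endpointer()
# Feed raw 16-bit mono PCM from stdin to the endpointer,
# one VAD-sized frame at a time.
while True:
    frame = sys.stdin.buffer.read(ep.frame_bytes)
    if len(frame) < ep.frame_bytes:
        ep.end_stream(frame)  # flush the final partial frame
        break
    speech = ep.process(frame)
    if speech is not None:
        # `speech` is audio inside a speech-like region; its position
        # in the stream is available as ep.speech_start / ep.speech_end.
        sys.stdout.buffer.write(speech)
```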

Live recognition
----------------

Finally, the examples `live.c` and `live.py` do online segmentation
and recognition.
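Combining the two ideas above, a live loop can be sketched roughly as
below.  The third-party `sounddevice` package is assumed here purely
for microphone capture (it is not part of PocketSphinx), and the
`Endpointer`/`Decoder` usage carries the same assumptions as the
earlier sketches; see `live.c` and `live.py` for the real code:

```python
from pocketsphinx import Decoder, Endpointer
import sounddevice  # assumed third-party package for audio capture

ep = Endpointer()
decoder = Decoder(samprate=ep.sample_rate)
# Samples are 16-bit, so each VAD frame is frame_bytes / 2 samples.
with sounddevice.RawInputStream(samplerate=ep.sample_rate, channels=1,
                                dtype="int16",
                                blocksize=ep.frame_bytes // 2) as stream:
    while True:
        frame, _overflowed = stream.read(ep.frame_bytes // 2)
        was_in_speech = ep.in_speech
        speech = ep.process(bytes(frame))
        if speech is not None:
            if not was_in_speech:
                decoder.start_utt()   # a speech segment has begun
            decoder.process_raw(speech)
            if not ep.in_speech:
                decoder.end_utt()     # segment ended: report the result
                hyp = decoder.hyp()
                if hyp is not None:
                    print(hyp.hypstr)
```

Each speech-like region detected by the endpointer becomes one
utterance for the decoder, so results are printed segment by segment
as you speak.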