Source: ollama-python
Maintainer: Home Assistant Team <team+homeassistant@tracker.debian.org>
Uploaders:
 Edward Betts <edward@4angle.com>,
Section: python
Priority: optional
Build-Depends:
 debhelper-compat (= 13),
 dh-sequence-python3,
 pybuild-plugin-pyproject,
 python3-all,
 python3-hatchling,
Build-Depends-Indep:
 python3-hatch-vcs,
 python3-httpx <!nocheck>,
 python3-pillow <!nocheck>,
 python3-pydantic <!nocheck>,
 python3-pytest <!nocheck>,
 python3-pytest-asyncio <!nocheck>,
 python3-pytest-cov <!nocheck>,
 python3-pytest-httpserver <!nocheck>,
Rules-Requires-Root: no
Standards-Version: 4.7.2
Homepage: https://github.com/jmorganca/ollama-python
Vcs-Browser: https://salsa.debian.org/homeassistant-team/deps/ollama-python
Vcs-Git: https://salsa.debian.org/homeassistant-team/deps/ollama-python.git
Testsuite: autopkgtest-pkg-pybuild

Package: python3-ollama
Architecture: all
Depends:
 ${misc:Depends},
 ${python3:Depends},
Description: Library for interacting with the Ollama server and its AI models
 This library integrates with an Ollama server to interact with AI language
 models and build conversational experiences. It exposes an API for querying
 the server and generating text, and supports operations such as model
 management, message exchange, and prompt handling. Once configured with the
 address of a network-accessible Ollama server, it can fetch and generate
 responses based on context supplied by Home Assistant or similar platforms.
 Model selection and prompt templates let the library tailor responses to a
 specific environment, although it has no direct control over connected
 devices.