================
REST API Usage
================
Authentication
==============
By default, authentication is configured to the `"basic" mode`_. You need to
provide an `Authorization` header in your HTTP requests with a valid username
(the password is not used). The "admin" username is granted all privileges,
whereas any other username is recognized as having standard permissions.
.. _"basic" mode: https://tools.ietf.org/html/rfc7617
You can customize permissions by specifying a different `policy_file` than the
default one.
If you set the `api.auth_mode` value to `keystone`, the OpenStack Keystone
middleware will be enabled for authentication. You then need to authenticate
against Keystone and provide an `X-Auth-Token` header with a valid token for
each request sent to Gnocchi's API.
If you set the `api.auth_mode` value to `remoteuser`, Gnocchi will look at the
HTTP server REMOTE_USER environment variable to get the username. Then the
permissions model is the same as the "basic" mode.
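For illustration, here is a minimal Python sketch using the ``requests``
library with the default "basic" mode. The endpoint ``http://localhost:8041``
and the usernames are assumptions; adapt them to your deployment::

  import requests

  GNOCCHI = "http://localhost:8041"

  # "basic" mode: only the username matters, the password is ignored,
  # so an empty string is fine.
  resp = requests.get(GNOCCHI + "/v1/metric", auth=("admin", ""))
  print(resp.status_code)

  # "keystone" mode would instead pass a valid token:
  # requests.get(GNOCCHI + "/v1/metric", headers={"X-Auth-Token": token})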
Metrics
=======
Gnocchi provides an object type that is called |metric|. A |metric|
designates anything that can be measured: the CPU usage of a server, the
temperature of a room or the number of bytes sent by a network interface.
A |metric| only has a few properties: a UUID to identify it, a name, and the
|archive policy| that will be used to store and aggregate the |measures|.
Create
------
To create a |metric|, the following API request should be used:
{{ scenarios['create-metric']['doc'] }}
.. note::
Once the |metric| is created, the |archive policy| attribute is fixed and
unchangeable. The definition of the |archive policy| can be changed through
the :ref:`archive_policy endpoint<archive-policy-patch>` though.
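As a complementary sketch (the generated example above remains the reference),
the same request with Python ``requests``; the ``low`` |archive policy| name
and the endpoint URL are assumptions::

  import requests

  resp = requests.post(
      "http://localhost:8041/v1/metric",
      json={"archive_policy_name": "low"},
      auth=("admin", ""),
  )
  metric = resp.json()
  print(metric["id"])  # UUID generated by Gnocchi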
Read
----
Once created, you can retrieve the |metric| information:
{{ scenarios['get-metric']['doc'] }}
List
----
To retrieve the list of all the |metrics| created, use the following request:
{{ scenarios['list-metric']['doc'] }}
Pagination
~~~~~~~~~~
Considering the large volume of |metrics| Gnocchi will store, query results
are limited to the `max_limit` value set in the configuration file. Returned
results are ordered by the |metrics|' id values. To retrieve the next page of
results, the id of the last |metric| of the current page should be given as
the `marker` parameter.
Default ordering and limits as well as page start can be modified
using query parameters:
{{ scenarios['list-metric-pagination']['doc'] }}
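A hypothetical pagination loop in Python, assuming the `limit`, `marker` and
`sort` query parameters shown above and the endpoint used in the earlier
sketches::

  import requests

  GNOCCHI = "http://localhost:8041"
  metrics, marker = [], None
  while True:
      params = {"limit": 100, "sort": "id:asc"}
      if marker:
          params["marker"] = marker
      page = requests.get(GNOCCHI + "/v1/metric",
                          params=params, auth=("admin", "")).json()
      if not page:
          break
      metrics.extend(page)
      marker = page[-1]["id"]  # last id becomes the marker of the next page
  print(len(metrics))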
Delete
------
Metrics can be deleted through a request:
{{ scenarios['delete-metric']['doc'] }}
See also :ref:`Resources <resources-endpoint>` for similar operations specific
to metrics associated with a |resource|.
Measures
========
Push
----
It is possible to send |measures| to the |metric|:
{{ scenarios['post-measures']['doc'] }}
If there are no errors, Gnocchi does not return a response body, only a simple
status code. It is possible to provide any number of |measures|.
.. IMPORTANT::
While it is possible to send any number of (timestamp, value) pairs, they
still need to honor the constraints defined by the |archive policy| used by
the |metric|, such as the maximum |timespan|.
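A sketch of pushing |measures| with Python ``requests``; the metric id is a
placeholder to replace with a real one, and the endpoint is an assumption::

  import requests

  metric_id = "00000000-0000-0000-0000-000000000000"  # placeholder
  measures = [
      {"timestamp": "2022-03-06T14:33:57", "value": 43.1},
      {"timestamp": "2022-03-06T14:34:12", "value": 12.0},
  ]
  resp = requests.post(
      "http://localhost:8041/v1/metric/%s/measures" % metric_id,
      json=measures,
      auth=("admin", ""),
  )
  # On success, Gnocchi returns an empty body and a 2xx status code.
  resp.raise_for_status()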
Batch
~~~~~
It is also possible to batch |measures| sending, i.e. send several |measures|
for different |metrics| in a single call:
{{ scenarios['post-measures-batch']['doc'] }}
Or using named |metrics| of |resources|:
{{ scenarios['post-measures-batch-named']['doc'] }}
If some named |metrics| specified in the batch request do not exist, Gnocchi
can try to create them as long as an |archive policy| rule matches:
{{ scenarios['post-measures-batch-named-create']['doc'] }}
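A sketch of the batch endpoint keyed by |metric| id; the ids are placeholders
and the endpoint URL is an assumption::

  import requests

  batch = {
      "00000000-0000-0000-0000-000000000001": [
          {"timestamp": "2022-03-06T14:34:12", "value": 12},
      ],
      "00000000-0000-0000-0000-000000000002": [
          {"timestamp": "2022-03-06T14:34:12", "value": 42},
      ],
  }
  requests.post(
      "http://localhost:8041/v1/batch/metrics/measures",
      json=batch, auth=("admin", ""),
  ).raise_for_status()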
Read
----
Once |measures| are sent, it is possible to retrieve |aggregates| using *GET*
on the same endpoint:
{{ scenarios['get-measures']['doc'] }}
The list of points returned is composed of tuples with (timestamp,
|granularity|, value) sorted by timestamp. The |granularity| is the |timespan|
covered by aggregation for this point.
Refresh
~~~~~~~
Depending on the driver, there may be some lag after pushing |measures| before
they are processed and queryable. To ensure your query returns all |aggregates|
that have been pushed and processed, you can force any unprocessed |measures|
to be handled:
{{ scenarios['get-measures-refresh']['doc'] }}
.. note::
Depending on the amount of data that is unprocessed, `refresh` may add
some overhead to your query.
Filter
~~~~~~
Time range
``````````
It is possible to filter the |aggregates| over a time range by specifying the
*start* and/or *stop* parameters to the query with a timestamp. The timestamp
format can be either a floating-point number (UNIX epoch) or an ISO 8601
formatted timestamp:
{{ scenarios['get-measures-from']['doc'] }}
Aggregation
```````````
By default, the aggregated values that are returned use the *mean*
|aggregation method|. It is possible to request any other method defined by
the policy by specifying the *aggregation* query parameter:
{{ scenarios['get-measures-max']['doc'] }}
Granularity
```````````
It's possible to provide the |granularity| argument to specify the
|granularity| to retrieve, rather than all the |granularities| available:
{{ scenarios['get-measures-granularity']['doc'] }}
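Combining the filters above in a single Python sketch; the timestamps,
aggregation and granularity values are arbitrary examples and must match what
the |archive policy| actually stores::

  import requests

  metric_id = "00000000-0000-0000-0000-000000000000"  # placeholder
  points = requests.get(
      "http://localhost:8041/v1/metric/%s/measures" % metric_id,
      params={
          "start": "2022-03-06T14:00:00",
          "stop": "2022-03-06T15:00:00",
          "aggregation": "max",   # must be computed by the archive policy
          "granularity": 300,     # seconds
      },
      auth=("admin", ""),
  ).json()
  # Each point is a (timestamp, granularity, value) triple.
  for timestamp, granularity, value in points:
      print(timestamp, granularity, value)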
Resample
~~~~~~~~
In addition to |granularities| defined by the |archive policy|, |aggregates|
can be resampled to a new |granularity|.
{{ scenarios['get-measures-resample']['doc'] }}
Time-series data can also be grouped by calendar dates beyond a standard day.
The resulting groupings are tied to the leading date of the group. For example,
grouping on month returns a monthly aggregate linked to the first of the month.
{{ scenarios['get-measures-resample-calendar']['doc'] }}
Available calendar groups are:
* `Y` – by year
* `H` – by half
* `Q` – by quarter
* `M` – by month
* `W` – by week, starting on Sunday
.. note::
If you plan to execute the query often, it is recommended for performance
to leverage an |archive policy| with the needed |granularity| instead of
resampling the time series on each query.
.. note::
Depending on the |aggregation method| and frequency of |measures|, resampled
data may lack accuracy as it is working against previously aggregated data.
.. note::
Gnocchi has an :ref:`aggregates <aggregates>` endpoint which provides
resampling as well as additional capabilities.
Archive Policy
==============
When sending |measures| for a |metric| to Gnocchi, the values are dynamically
aggregated. That means that Gnocchi does not store all sent |measures|, but
aggregates them over a certain period of time.
Gnocchi provides several built-in |aggregation methods|: *mean*, *sum*,
*last*, *max*, *min*, *std*, *median*, *first*, *count* and *Npct* (with
0 < N < 100). They can be prefixed with `rate:` to compute the rate of change
before aggregating.
An |archive policy| is defined by a list of items in the `definition` field.
Each item is composed of: the |timespan|; the |granularity|, which is the level
of precision that must be kept when aggregating data; and the number of points.
Each item must provide at least two of the *points*, |granularity| and
|timespan| fields; the remaining one is deduced from the other two. For
example, an item might be defined as 12 points over 1 hour (one point every
5 minutes), or 1 point every 1 hour over 1 day (24 points).
By default, new |measures| can only be processed if they have timestamps in the
future or part of the last aggregation period. The last aggregation period size
is based on the largest |granularity| defined in the |archive policy|
definition. To allow processing |measures| that are older than the period, the
`back_window` parameter can be used to set the number of coarsest periods to
keep. That way it is possible to process |measures| that are older than the
last timestamp period boundary.
For example, if an |archive policy| is defined with coarsest aggregation of 1
hour, and the last point processed has a timestamp of 14:34, it's possible to
process |measures| back to 14:00 with a `back_window` of 0. If the
`back_window` is set to 2, it will be possible to send |measures| with
timestamp back to 12:00 (14:00 minus 2 times 1 hour).
Create
------
The REST API allows creating |archive policies| in this way:
{{ scenarios['create-archive-policy']['doc'] }}
By default, the |aggregation methods| computed and stored are the ones defined
with `default_aggregation_methods` in the configuration file. It is possible to
change the |aggregation methods| used in an |archive policy| by specifying the
list of |aggregation methods| to use in the `aggregation_methods` attribute of
an |archive policy|.
{{ scenarios['create-archive-policy-without-max']['doc'] }}
The list of |aggregation methods| can either be:
- a list of |aggregation methods| to use, e.g. `["mean", "max"]`
- a list of methods to remove (prefixed by `-`) and/or to add (prefixed by `+`)
to the default list (e.g. `["+mean", "-last"]`)
If `*` is included in the list, it's substituted by the list of all supported
|aggregation methods|.
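A sketch of creating an |archive policy| in Python; the policy name,
definition and methods are example values, and the endpoint is an assumption::

  import requests

  policy = {
      "name": "short",
      "back_window": 0,
      "aggregation_methods": ["mean", "max"],
      "definition": [
          {"granularity": "5 minutes", "timespan": "1 day"},
          {"granularity": "1 hour", "points": 168},   # 7 days of hourly points
      ],
  }
  requests.post(
      "http://localhost:8041/v1/archive_policy",
      json=policy, auth=("admin", ""),
  ).raise_for_status()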
Read
----
Once the |archive policy| is created, the complete set of properties is
computed and returned, with the URL of the |archive policy|. This URL can be
used to retrieve the details of the |archive policy| later:
{{ scenarios['get-archive-policy']['doc'] }}
List
----
It is also possible to list |archive policies|:
{{ scenarios['list-archive-policy']['doc'] }}
.. _archive-policy-patch:
Update
------
Existing |archive policies| can be modified to retain more or less data
depending on requirements. If the policy coverage is expanded, |aggregates| are
not retroactively calculated as backfill to accommodate the new |timespan|:
{{ scenarios['update-archive-policy']['doc'] }}
The `back_window` attribute of the update request is optional and can be used
to change the `back_window` of an |archive policy|. If the back window is
reduced, data will be deleted from the backend when the metricd janitor runs.
On the other hand, if the value is increased, more raw data will be stored to
enable reprocessing in case of data backfill.
.. note::
|Granularities| cannot be changed to a different rate. Also, |granularities|
cannot be added or dropped from a policy.
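A sketch of the corresponding PATCH request; the policy name ``short`` and the
new timespans are example values, and the |granularities| are left unchanged
as required by the note above::

  import requests

  update = {
      "definition": [
          {"granularity": "5 minutes", "timespan": "2 days"},
          {"granularity": "1 hour", "timespan": "30 days"},
      ],
  }
  requests.patch(
      "http://localhost:8041/v1/archive_policy/short",
      json=update, auth=("admin", ""),
  ).raise_for_status()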
Delete
------
It is possible to delete an |archive policy| if it is not used by any |metric|:
{{ scenarios['delete-archive-policy']['doc'] }}
.. note::
An |archive policy| cannot be deleted until all |metrics| associated with it
are removed by a metricd daemon.
Archive Policy Rule
===================
Gnocchi provides the ability to define a mapping called `archive_policy_rule`.
An |archive policy| rule defines a mapping between a |metric| and an
|archive policy|. This gives users the ability to pre-define rules so an
|archive policy| is assigned to |metrics| based on a matched pattern.
An |archive policy| rule has a few properties: a name to identify it, the
name of the |archive policy| to apply, and a |metric| pattern to match
|metric| names.
For example, an |archive policy| rule could map any volume |metric| whose name
matches the pattern `volume.*` to a medium |archive policy| by default. When a
|metric| named `volume.size` is posted, it matches the pattern, the rule
applies, and the medium |archive policy| is assigned. If multiple rules match,
the longest matching rule is taken. For example, if two rules exist matching
`*` and `disk.*`, a `disk.io.rate` |metric| would match the `disk.*` rule
rather than the `*` rule.
Create
------
To create a rule, the following API request should be used:
{{ scenarios['create-archive-policy-rule']['doc'] }}
The `metric_pattern` is a wildcard pattern matched against |metric| names, for
example:
- `*` matches anything
- `disk.*` matches `disk.io`
- `disk.io.*` matches `disk.io.rate`
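A sketch of creating such a rule with Python ``requests``; the rule name,
pattern, policy name and endpoint URL are example values and assumptions::

  import requests

  rule = {
      "name": "disk-rule",
      "metric_pattern": "disk.*",
      "archive_policy_name": "low",
  }
  requests.post(
      "http://localhost:8041/v1/archive_policy_rule",
      json=rule, auth=("admin", ""),
  ).raise_for_status()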
Read
----
Once created, you can retrieve the rule information:
{{ scenarios['get-archive-policy-rule']['doc'] }}
List
----
It is also possible to list |archive policy| rules. The result set is ordered
by the `metric_pattern`, in reverse alphabetical order:
{{ scenarios['list-archive-policy-rule']['doc'] }}
Update
------
It is possible to rename an archive policy rule:
{{ scenarios['rename-archive-policy-rule']['doc'] }}
Delete
------
It is possible to delete an |archive policy| rule:
{{ scenarios['delete-archive-policy-rule']['doc'] }}
.. _resources-endpoint:
Resources
=========
Gnocchi provides the ability to store and index |resources|. Each |resource|
has a type. The basic type of |resources| is *generic*, but more specialized
subtypes also exist, especially to describe OpenStack resources.
Create
------
To create a generic |resource|:
{{ scenarios['create-resource-generic']['doc'] }}
The *user_id* and *project_id* attributes may be any arbitrary string. The
:ref:`timestamps<timestamp-format>` describing the lifespan of the
|resource| are optional, and *started_at* is by default set to the current
timestamp.
The *id* attribute may be a UUID or some other arbitrary string. If it is
a UUID, Gnocchi will use it verbatim. If it is not a UUID, the original
value will be stored in the *original_resource_id* attribute and Gnocchi
will generate a new UUID that is unique for the user. That is, if two
users submit create requests with the same non-UUID *id* attribute, the
resulting resources will have different UUID values in their respective
*id* attributes.
You may use either of the *id* or the *original_resource_id* attributes to
refer to the |resource|. The value returned by the create operation
includes a `Location` header referencing the *id*.
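A sketch of creating a generic |resource| in Python; the attribute values are
arbitrary examples and the endpoint is an assumption::

  import uuid
  import requests

  resource = {
      "id": str(uuid.uuid4()),
      "user_id": "my-user",          # any arbitrary string
      "project_id": "my-project",    # any arbitrary string
      "started_at": "2022-01-02T23:23:34",
  }
  resp = requests.post(
      "http://localhost:8041/v1/resource/generic",
      json=resource, auth=("admin", ""),
  )
  print(resp.headers["Location"])    # URL of the created resource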
Non-generic resources
~~~~~~~~~~~~~~~~~~~~~
More specialized |resources| can be created. For example, the *instance* is
used to describe an OpenStack instance as managed by Nova_.
{{ scenarios['create-resource-instance']['doc'] }}
All specialized types have their own optional and mandatory attributes,
but they all include attributes from the generic type as well.
.. _Nova: http://launchpad.net/nova
With metrics
~~~~~~~~~~~~
Each |resource| can be linked to any number of |metrics| on creation:
{{ scenarios['create-resource-instance-with-metrics']['doc'] }}
It is also possible to create |metrics| at the same time you create a |resource|
to save some requests:
{{ scenarios['create-resource-instance-with-dynamic-metrics']['doc'] }}
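A sketch combining both forms, linking an existing |metric| by id and creating
a new one on the fly; the ids, names and |archive policy| are placeholders::

  import requests

  resource = {
      "id": "my-server-01",              # non-UUID ids are allowed
      "user_id": "my-user",
      "project_id": "my-project",
      "metrics": {
          # link an existing metric by its id...
          "cpu.util": "00000000-0000-0000-0000-000000000000",
          # ...or create one dynamically by giving its archive policy
          "memory.usage": {"archive_policy_name": "low"},
      },
  }
  requests.post(
      "http://localhost:8041/v1/resource/generic",
      json=resource, auth=("admin", ""),
  ).raise_for_status()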
Read
----
To retrieve a |resource| by its URL provided by the `Location` header at
creation time:
{{ scenarios['get-resource-generic']['doc'] }}
List
----
All |resources| can be listed, either by using the `generic` type that will
list all types of |resources|, or by filtering on their |resource| type:
{{ scenarios['list-resource-generic']['doc'] }}
Specific resource type
~~~~~~~~~~~~~~~~~~~~~~
No attributes specific to the |resource| type are returned when using the
`generic` endpoint. To retrieve them, list using the specific |resource| type
endpoint:
{{ scenarios['list-resource-instance']['doc'] }}
With details
~~~~~~~~~~~~
To retrieve a more detailed view of the |resources|, pass `details=true` as a
query parameter:
{{ scenarios['list-resource-generic-details']['doc'] }}
Limit attributes
~~~~~~~~~~~~~~~~
To limit response attributes, pass `attrs=id&attrs=started_at&attrs=user_id`
as query parameters:
{{ scenarios['list-resource-generic-limit-attrs']['doc'] }}
Pagination
~~~~~~~~~~
Similar to the |metric| list, query results are limited to the `max_limit`
value set in the configuration file. Returned results represent a single page
of data and are ordered by the |resources|' `revision_start` time and
`started_at` values:
{{ scenarios['list-resource-generic-pagination']['doc'] }}
List resource metrics
---------------------
The |metrics| associated with a |resource| can be accessed and manipulated
using the usual `/v1/metric` endpoint or using the named relationship with the
|resource|:
{{ scenarios['get-resource-named-metrics-measures']['doc'] }}
Update
------
It's possible to modify a |resource| by re-uploading it partially with the
modified fields:
{{ scenarios['patch-resource']['doc'] }}
It is also possible to associate additional |metrics| with a |resource|:
{{ scenarios['append-metrics-to-resource']['doc'] }}
History
-------
And to retrieve a |resource|'s modification history:
{{ scenarios['get-patched-instance-history']['doc'] }}
Delete
------
It is possible to delete a |resource| altogether:
{{ scenarios['delete-resource-generic']['doc'] }}
It is also possible to delete a batch of |resources| based on attribute
values; the number of deleted |resources| is returned.
Batch
~~~~~
To delete |resources| based on ids:
{{ scenarios['delete-resources-by-ids']['doc'] }}
or delete |resources| based on time:
{{ scenarios['delete-resources-by-time']['doc']}}
.. IMPORTANT::
When a |resource| is deleted, all its associated |metrics| are deleted at the
same time.
When a batch of |resources| is deleted, an attribute filter is required to
avoid deleting the entire database.
Resource Types
==============
Gnocchi is able to manage |resource| types with custom attributes.
Create
------
To create a new |resource| type:
{{ scenarios['create-resource-type']['doc'] }}
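A sketch of creating a |resource| type with custom attributes; the type name
and attribute definitions are example values, and the endpoint is an
assumption::

  import requests

  resource_type = {
      "name": "switch_port",
      "attributes": {
          "port_name": {"type": "string", "required": True, "max_length": 64},
          "speed_mbps": {"type": "number", "required": False},
      },
  }
  requests.post(
      "http://localhost:8041/v1/resource_type",
      json=resource_type, auth=("admin", ""),
  ).raise_for_status()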
Read
----
Then to retrieve its description:
{{ scenarios['get-resource-type']['doc'] }}
List
----
All |resource| types can be listed like this:
{{ scenarios['list-resource-type']['doc'] }}
Update
------
Attributes can be added or removed:
{{ scenarios['patch-resource-type']['doc'] }}
Delete
------
It can also be deleted if no more |resources| are associated to it:
{{ scenarios['delete-resource-type']['doc'] }}
.. note::
Creating a |resource| type means creating new tables in the indexer
backend. This is a heavy operation that will lock some tables for a short
amount of time. When the |resource| type is created, its initial `state` is
`creating`. When the new tables have been created, the state switches to
`active` and the new |resource| type is ready to be used. If something
unexpected occurs during this step, the state switches to `creation_error`.
The same behavior occurs when the |resource| type is deleted: the state
first switches to `deleting` and the |resource| type is no longer usable.
The tables are then removed and the resource_type is finally deleted from
the database. If an unexpected error occurs, the state switches to
`deletion_error`.
Search
======
Gnocchi's search API supports the ability to execute a query across
|resources| or |metrics|. This API provides a language to construct more
complex matching constraints beyond basic filtering.
Usage and format
----------------
You can specify a time range to look for by specifying the `start` and/or
`stop` query parameter, and the |aggregation method| to use by specifying the
`aggregation` query parameter.
Query can be expressed in two formats: `JSON` or `STRING`.
The supported operators are: equal to (`=`, `==` or `eq`), less than (`<` or
`lt`), greater than (`>` or `gt`), less than or equal to (`<=`, `le` or `≤`),
greater than or equal to (`>=`, `ge` or `≥`), not equal to (`!=`, `ne` or
`≠`), addition (`+` or `add`), subtraction (`-` or `sub`), multiplication
(`*`, `mul` or `×`), and division (`/`, `div` or `÷`). In JSON format, these
operations take only one argument, the second argument being automatically set
to the field value. In STRING format, this is just `<first argument>
<operator> <second argument>`.
The operators or (`or` or `∨`), and (`and` or `∧`) and `not` are also
supported. In JSON format, they take a list of arguments as parameters. Using
STRING, the format is `<left group> and/or <right group>` or `not <right
group>`. With the STRING format, parentheses can be used to create groups.
An example of the JSON format::

  ["and",
    ["=", ["host", "example1"]],
    ["like", ["owner", "admin-%"]],
  ]

And its STRING format equivalent::

  host = "example1" and owner like "admin-%"
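As a sketch, the STRING format can be passed URL-encoded in the ``filter``
parameter; the ``requests`` library handles the encoding. The endpoint URL is
an assumption, and the JSON body form is shown verbatim in the scenarios
below::

  import requests

  string_query = 'host = "example1" and owner like "admin-%"'
  resp = requests.post(
      "http://localhost:8041/v1/search/resource/generic",
      params={"filter": string_query},
      auth=("admin", ""),
  )
  print(resp.json())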
.. _search-resource:
Resource
--------
It's possible to search for |resources| using a query mechanism, by using the
`POST` method and uploading a JSON-formatted query, or by passing a
STRING-formatted query URL-encoded in the ``filter`` parameter.
Single filter
~~~~~~~~~~~~~
When listing |resources|, it is possible to filter |resources| based on
attributes values:
{{ scenarios['search-resource-for-user']['doc'] }}
Or even:
{{ scenarios['search-resource-for-host-like']['doc'] }}
For the ``filter`` parameter version, the value is the URL-encoded version of
``{{ scenarios['search-resource-for-host-like-filter']['filter'] }}``
{{ scenarios['search-resource-for-host-like-filter']['doc'] }}
Multiple filters
~~~~~~~~~~~~~~~~
Complex operators such as `and` and `or` are also available:
{{ scenarios['search-resource-for-user-after-timestamp']['doc'] }}
The ``filter`` version is
``{{ scenarios['search-resource-for-user-after-timestamp-filter']['filter'] }}``
URL-encoded.
{{ scenarios['search-resource-for-user-after-timestamp-filter']['doc'] }}
With details
~~~~~~~~~~~~
Details about the |resource| can also be retrieved at the same time:
{{ scenarios['search-resource-for-user-details']['doc'] }}
Limit attributes
~~~~~~~~~~~~~~~~
To limit response attributes, pass `attrs=id&attrs=started_at&attrs=user_id`
as query parameters:
{{ scenarios['search-resource-for-user-limit-attrs']['doc'] }}
History
~~~~~~~
It's possible to search for old revisions of |resources| in the same ways:
{{ scenarios['search-resource-history']['doc'] }}
Time range
``````````
The time range of the history can be set, too:
{{ scenarios['search-resource-history-partial']['doc'] }}
This can be done with the ``filter`` parameter too:
``{{ scenarios['search-resource-history-partial-filter']['filter'] }}``
{{ scenarios['search-resource-history-partial-filter']['doc'] }}
Magic
~~~~~
The special attribute `lifespan`, which is equivalent to
`ended_at - started_at`, is also available in filtering queries.
{{ scenarios['search-resource-lifespan']['doc'] }}
Metric
------
It is possible to search for values in |metrics|. For example, this will look
for all values that are greater than or equal to 50 if we add 23 to them and
that are not equal to 55. You have to specify the list of |metrics| to look
into by using the `metric_id` query parameter several times.
{{ scenarios['search-value-in-metric']['doc'] }}
And it is possible to search for values in |metrics| by using one or more
|granularities|:
{{ scenarios['search-value-in-metrics-by-granularity']['doc'] }}
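A simpler hypothetical value search in Python, looking for values greater than
or equal to 50 in a single |metric|; the query shape follows the scenarios
above, and the metric id and endpoint are placeholders and assumptions::

  import requests

  metric_id = "00000000-0000-0000-0000-000000000000"  # placeholder
  matches = requests.post(
      "http://localhost:8041/v1/search/metric",
      params={"metric_id": metric_id},
      json={">=": 50},
      auth=("admin", ""),
  ).json()
  print(matches)  # values matching the query, grouped by metric id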
.. _aggregates:
Dynamic Aggregates
==================
Gnocchi supports the ability to make on-the-fly reaggregations of existing
|metrics| and the ability to manipulate and transform |metrics| as required.
This is accomplished by passing an `operations` value describing the actions
to apply to the |metrics|.
.. note::
`operations` can also be passed as a string, for example:
`"operations": "(aggregate mean (metric (metric-id aggregation) (metric-id aggregation)))"`
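A sketch of calling the aggregates endpoint with the string form of
`operations` shown in the note above; the metric ids, time boundaries and
endpoint URL are placeholders and assumptions::

  import requests

  metric_1 = "00000000-0000-0000-0000-000000000001"
  metric_2 = "00000000-0000-0000-0000-000000000002"
  payload = {
      "operations": "(aggregate mean (metric (%s mean) (%s mean)))"
                    % (metric_1, metric_2),
  }
  resp = requests.post(
      "http://localhost:8041/v1/aggregates",
      params={"start": "2022-03-06T14:00:00", "stop": "2022-03-06T15:00:00"},
      json=payload, auth=("admin", ""),
  )
  print(resp.json())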
Cross-metric Usage
------------------
Aggregation across multiple |metrics| has different behavior depending on
whether boundary values are set (`start` and `stop`) and if `needed_overlap`
is set.
Overlap percentage
~~~~~~~~~~~~~~~~~~
Gnocchi expects that time series have a certain percentage of timestamps in
common. This percentage is controlled by the `needed_overlap` parameter, which
by default expects 100% overlap. If this percentage is not reached, an error
is returned.
.. note::
If `start` or `stop` boundary is not set, Gnocchi will set the missing
boundary to the first or last timestamp common across all series.
Backfill
~~~~~~~~
The ability to fill in missing points from a subset of time series is supported
by specifying a `fill` value. Valid fill values include any float, `dropna`,
`null`, `ffill`, `bfill`, `full_ffill` or `full_bfill`. In the case of `null`,
Gnocchi will compute the aggregation using only the existing points. `dropna`
is like `null` but removes NaN values from the result. `ffill` fills NaN
measures in a metric with the previous non-NaN value, and `bfill` fills them
with the next non-NaN value; if the metric starts (or, in the case of `bfill`,
ends) with NaNs, those are left unchanged and excluded from the resulting set,
just like `dropna` does. To fill the remaining NaN values, producing a metric
with as many timestamps as there are in all metrics combined, use the
`full_ffill` and `full_bfill` variants: `full_ffill` applies a forward then a
backward fill, and `full_bfill` a backward then a forward fill.
{{ scenarios['get-aggregates-by-metric-ids-fill']['doc'] }}
Search and aggregate
--------------------
It's also possible to do that aggregation on |metrics| linked to |resources|.
In order to select these |resources|, the following endpoint accepts a query
such as the one described in the :ref:`resource search API <search-resource>`.
{{ scenarios['get-aggregates-by-attributes-lookup']['doc'] }}
The metric name can be a wildcard, too.
{{ scenarios['get-aggregates-by-attributes-lookup-wildcard']['doc'] }}
Groupby
~~~~~~~
It is possible to group the |resource| search results by any attribute of the
requested |resource| type, and then compute the aggregation:
{{ scenarios['get-aggregates-by-attributes-lookup-groupby']['doc'] }}
Gnocchi will only group by the last value of each attribute for each resource.
Therefore, if a resource's attributes changed over the provided time window,
Gnocchi will only display the last attribute value.
Below we have an example of a resource that had the `flavor_id` attribute
changed from `1` to `2` after `2015`.
{{ scenarios['get-aggregates-by-attributes-lookup-groupby-without-history']['doc'] }}
Between the measurements from `2015` and `2099`, the `flavor_id` was changed
from `1` to `2`, but in the response, the `flavor_id` appears as if it had
always been `2`. In such cases, if you want to get all the attribute values
over the time window, and group each measurement by the attribute value the
resource had when the measurement was collected, you can add
`use_history=true` to the aggregates API request, as in the following example.
{{ scenarios['get-aggregates-by-attributes-lookup-groupby-with-history']['doc'] }}
When using `use_history=true`, the re-aggregation process will not work as
expected: the values within the timespan that represents the queried
granularity are added together so they can be split into portions according to
their share of the timeframe. Therefore, if you want to use the re-aggregation
process, set the `use_history` option to `false`.
List of supported <operations>
------------------------------
Get one or more metrics
~~~~~~~~~~~~~~~~~~~~~~~
::
(metric <metric-id> <aggregation>)
(metric ((<metric-id> <aggregation>) (<metric-id> <aggregation>) ...))
metric-id: the id of a metric to retrieve
aggregation: the aggregation method to retrieve
.. note::
When used alone, this provides the ability to retrieve multiple |metrics| in a
single request.
Rolling window aggregation
~~~~~~~~~~~~~~~~~~~~~~~~~~
::
(rolling <aggregation method> <rolling window> (<operations>))
aggregation method: the aggregation method to use to compute the rolling window.
(mean, median, std, min, max, sum, var, count)
rolling window: number of previous values to aggregate
Aggregation across metrics
~~~~~~~~~~~~~~~~~~~~~~~~~~
::
(aggregate <aggregation method> ((<operations>), (<operations>), ...))
aggregation method: the aggregation method to use to compute the aggregate between metrics
(mean, median, std, min, max, sum, var, count)
Resample
~~~~~~~~
::
(resample <aggregation method> <granularity> (<operations>))
aggregation method: the aggregation method to use to compute the aggregate between metrics
(mean, median, std, min, max, sum, var, count)
granularity: the granularity (e.g.: 1d, 60s, ...)
.. note::
If you plan to execute the query often, it is recommended for performance
to leverage an |archive policy| with the needed |granularity| instead of
resampling the time series on each query.
Math operations
~~~~~~~~~~~~~~~
::
(<operator> <operations_or_float> <operations_or_float>)
operator: %, mod, +, add, -, sub, *, ×, mul, /, ÷, div, **, ^, pow
Boolean operations
~~~~~~~~~~~~~~~~~~
::
(<operator> <operations_or_float> <operations_or_float>)
operator: =, ==, eq, <, lt, >, gt, <=, ≤, le, >=, ≥, ge, !=, ≠, ne
Function operations
~~~~~~~~~~~~~~~~~~~
::
(abs (<operations>))
(absolute (<operations>))
(neg (<operations>))
(negative (<operations>))
(cos (<operations>))
(sin (<operations>))
(tan (<operations>))
(floor (<operations>))
(ceil (<operations>))
(clip (<operations>))
(clip_min (<operations>))
(clip_max (<operations>))
(rateofchange (<operations>))
Examples
--------
Aggregate then math
~~~~~~~~~~~~~~~~~~~
The following computes the mean aggregate across all the listed |metrics| and
then multiplies it by `4`.
{{ scenarios['get-aggregates-by-metric-ids']['doc'] }}
Between metrics
~~~~~~~~~~~~~~~
Operations between metrics can also be done, such as:
{{ scenarios['get-aggregates-between-metrics']['doc'] }}
List the top N resources that consume the most CPU during the last hour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is configured so that `stop` - `start` = `granularity`, which
yields only one point per instance.
This gives all the information needed to order by the `cpu.util` timeseries,
which can then be filtered down to N results.
{{ scenarios['use-case1-top-cpuutil-per-instances']['doc'] }}
Aggregation across metrics (deprecated)
=======================================
.. Note::
This API has been replaced by the more flexible :ref:`aggregates API <aggregates>`.
Gnocchi supports on-the-fly aggregation of previously aggregated data of
|metrics|.
It can be done by providing the list of |metrics| to aggregate:
{{ scenarios['get-across-metrics-measures-by-metric-ids']['doc'] }}
.. Note::
This aggregation is done against the |aggregates| built and updated for
a |metric| when new measurements are posted in Gnocchi. Therefore, the
aggregate of this already aggregated data may not make sense for certain
kinds of |aggregation method| (e.g. stdev).
By default, the |measures| are aggregated using the |aggregation method|
provided, e.g. you'll get a mean of means, or a max of maxes. You can specify
what method to use over the retrieved aggregation by using the `reaggregation`
parameter:
{{ scenarios['get-across-metrics-measures-by-metric-ids-reaggregate']['doc'] }}
It's also possible to do that aggregation on |metrics| linked to |resources|.
In order to select these |resources|, the following endpoint accepts a query
such as the one described in the :ref:`resource search API <search-resource>`.
{{ scenarios['get-across-metrics-measures-by-attributes-lookup']['doc'] }}
As with resource search, the query
``{{ scenarios['get-across-metrics-measures-by-attributes-lookup-filter']['filter'] }}``
can be passed in the ``filter`` parameter:
{{ scenarios['get-across-metrics-measures-by-attributes-lookup-filter']['doc'] }}
It is possible to group the |resource| search results by any attribute of the
requested |resource| type, and then compute the aggregation:
{{ scenarios['get-across-metrics-measures-by-attributes-lookup-groupby']['doc'] }}
Similar to retrieving |aggregates| for a single |metric|, the `refresh`
parameter can be provided to force all POSTed |measures| to be processed across
all |metrics| before computing the result. The `resample` parameter may be used
as well.
.. note::
Transformations (e.g. resample, absolute, ...) are done prior to any
reaggregation if both parameters are specified.
Also, aggregation across |metrics| has different behavior depending on whether
boundary values are set (`start` and `stop`) and if `needed_overlap` is set.
Gnocchi expects time series to have a certain percentage of timestamps in
common. This percentage is controlled by `needed_overlap`, which by default
expects 100% overlap. If this percentage is not reached, an error is returned.
.. note::
If `start` or `stop` boundary is not set, Gnocchi will set the missing
boundary to the first or last timestamp common across all series.
The ability to fill in missing points from a subset of time series is supported
by specifying a `fill` value. Valid fill values include any float, `dropna`,
`null`, `ffill`, `bfill`, `full_ffill` or `full_bfill`. In the case of `null`,
Gnocchi will compute the aggregation using only the existing points. `dropna`
is like `null` but removes NaN values from the result. `ffill` fills NaN
measures in a metric with the previous non-NaN value, and `bfill` fills them
with the next non-NaN value; if the metric starts (or, in the case of `bfill`,
ends) with NaNs, those are left unchanged and excluded from the resulting set,
just like `dropna` does. To fill the remaining NaN values, producing a metric
with as many timestamps as there are in all metrics combined, use the
`full_ffill` and `full_bfill` variants: `full_ffill` applies a forward then a
backward fill, and `full_bfill` a backward then a forward fill.
{{ scenarios['get-across-metrics-measures-by-metric-ids-fill']['doc'] }}
Capabilities
============
The list of |aggregation methods| that can be used in Gnocchi is extendable
and can differ between deployments. It is possible to get the supported list
of |aggregation methods| from the API server:
{{ scenarios['get-capabilities']['doc'] }}
Status
======
The overall status of the Gnocchi installation can be retrieved via an API call
reporting values such as the number of new |measures| to process for each
|metric|:
{{ scenarios['get-status']['doc'] }}
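A sketch reading the two endpoints above (capabilities and status) in Python;
the exact fields of the status response vary with the deployment, so the
sketch simply prints the raw JSON::

  import requests

  GNOCCHI = "http://localhost:8041"
  capabilities = requests.get(GNOCCHI + "/v1/capabilities",
                              auth=("admin", "")).json()
  print(capabilities["aggregation_methods"])

  status = requests.get(GNOCCHI + "/v1/status", auth=("admin", "")).json()
  print(status)  # includes the backlog of measures waiting to be processed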
.. _timestamp-format:
Timestamp format
================
Timestamps used in Gnocchi are always returned using the ISO 8601 format.
Gnocchi is able to understand a few timestamp formats when querying or
creating |resources|, for example:
- "2014-01-01 12:12:34" or "2014-05-20T10:00:45.856219", ISO 8601 timestamps.
- "10 minutes", which means "10 minutes from now".
- "-2 days", which means "2 days ago".
- 1421767030, a Unix epoch based timestamp.
.. include:: include/term-substitution.rst