<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
body {
font-size: 13px;
padding: 1em;
}
h1 {
font-size: 26px;
margin-bottom: 1em;
}
h2 {
font-size: 24px;
margin-bottom: 1em;
}
h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}
pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}
pre {
margin-top: 0.5em;
}
h1, h2, h3, p {
font-family: Arial, sans-serif;
}
h1, h2, h3 {
border-bottom: solid #CCC 1px;
}
.toc_element {
margin-top: 0.5em;
}
.firstline {
margin-left: 2em;
}
.method {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}
.details {
font-weight: bold;
font-size: 14px;
}
</style>
<h1><a href="checks_v1alpha.html">Checks API</a> . <a href="checks_v1alpha.aisafety.html">aisafety</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="#classifyContent">classifyContent(body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Analyze a piece of content with the provided set of policies.</p>
<p class="toc_element">
<code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="classifyContent">classifyContent(body=None, x__xgafv=None)</code>
<pre>Analyze a piece of content with the provided set of policies.
Args:
body: object, The request body.
The object takes the form of:
{ # Request proto for ClassifyContent RPC.
"classifierVersion": "A String", # Optional. Version of the classifier to use. If not specified, the latest version will be used.
"context": { # Context about the input that will be used to help on the classification. # Optional. Context about the input that will be used to help on the classification.
"prompt": "A String", # Optional. Prompt that generated the model response.
},
"input": { # Content to be classified. # Required. Content to be classified.
"textInput": { # Text input to be classified. # Content in text format.
"content": "A String", # Actual piece of text to be classified.
"languageCode": "A String", # Optional. Language of the text in ISO 639-1 format. If the language is invalid or not specified, the system will try to detect it.
},
},
"policies": [ # Required. List of policies to classify against.
{ # List of policies to classify against.
"policyType": "A String", # Required. Type of the policy.
"threshold": 3.14, # Optional. Score threshold to use when deciding if the content is violative or non-violative. If not specified, the default 0.5 threshold for the policy will be used.
},
],
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response proto for ClassifyContent RPC.
"policyResults": [ # Results of the classification for each policy.
{ # Result for one policy against the corresponding input.
"policyType": "A String", # Type of the policy.
"score": 3.14, # Final score for the results of this policy.
"violationResult": "A String", # Result of the classification for the policy.
},
],
}</pre>
</div>
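<p>As a non-authoritative sketch, the request body documented above can be assembled as a plain Python dict before being passed to the client library. The helper below is hypothetical (not part of the generated client); the policy type string <code>"DANGEROUS_CONTENT"</code> is an assumed placeholder value, not taken from this reference:</p>

```python
# Hypothetical helper for building a ClassifyContent request body
# matching the schema documented for classifyContent above.
def build_classify_request(text, policy_types, threshold=None, language_code=None):
    """Return a request-body dict with required input and policies fields."""
    policies = []
    for policy_type in policy_types:
        policy = {"policyType": policy_type}  # Required for each policy entry.
        if threshold is not None:
            policy["threshold"] = threshold  # Optional; default is 0.5 per policy.
        policies.append(policy)

    text_input = {"content": text}
    if language_code is not None:
        text_input["languageCode"] = language_code  # Optional ISO 639-1 code.

    return {
        "input": {"textInput": text_input},  # Required.
        "policies": policies,                # Required.
    }


body = build_classify_request(
    "Some user-provided text",
    ["DANGEROUS_CONTENT"],  # Assumed placeholder policy type.
    threshold=0.7,
)
```

<p>Assuming a <code>service</code> object built via <code>googleapiclient.discovery.build('checks', 'v1alpha', ...)</code> with appropriate credentials, the body would then be sent with <code>service.aisafety().classifyContent(body=body).execute()</code>, and each entry in the returned <code>policyResults</code> list carries the <code>policyType</code>, <code>score</code>, and <code>violationResult</code> fields described above.</p>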
<div class="method">
<code class="details" id="close">close()</code>
<pre>Close httplib2 connections.</pre>
</div>
</body></html>