# Generative Prompt
Sends a prompt to an LLM and returns the generated response.
## Inputs
| Name | Type | Description |
|---|---|---|
| Prompt | String | The LLM prompt |
| Temperature | Number | Controls randomness in the generated text; higher values produce more varied output. Adjust only if you understand the effect. |
| Model | String | The LLM model to use |
| Response Format | Object (optional) | A JSON schema that constrains the structure of the response. Omit for free-form text. |
Model options: `gemini-2.5-flash`, `azure-gpt-4o-mini`, `azure-gpt-4o`, `azure-gpt-5-mini`, `azure-gpt-5`
## Outputs
| Name | Type | Description |
|---|---|---|
| Response | String | The LLM response |
| Success | Boolean | Whether the call was successful |
## Response format example
Use the Response Format input to constrain the LLM output to a JSON structure. Example:
```json
{
  "type": "json_schema",
  "json_schema": {
    "name": "ConversationEvaluation",
    "description": "An evaluation of an E-commerce customer service conversation for a given criteria.",
    "schema": {
      "type": "object",
      "properties": {
        "explanation": {
          "description": "The explanation for the verdict, based on the rubric and conversation evidence.",
          "type": "string"
        },
        "score": {
          "description": "A score for the criterion on a 0.0-1.0 scale (in 0.1 increments)",
          "type": "number"
        }
      },
      "required": ["explanation", "score"],
      "additionalProperties": false
    }
  }
}
```

This produces structured output with `explanation` (string) and `score` (number).
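To illustrate what this schema enforces, here is a minimal Python sketch that checks a response against it. The response string is a hypothetical example, not actual activity output; the checks mirror the schema's `required`, `additionalProperties: false`, and type constraints.

```python
import json

# Hypothetical response string, shaped the way the schema above requires.
raw = '{"explanation": "Clear refund handling per the rubric.", "score": 0.8}'
data = json.loads(raw)

# The schema guarantees exactly these two keys ...
assert set(data) == {"explanation", "score"}
# ... with these types.
assert isinstance(data["explanation"], str)
assert isinstance(data["score"], (int, float))
```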
## Notes
- The formatted response is returned as a string. If you expect an object, use the JSON Parse utility activity.
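A minimal sketch of the note above, using plain Python as a stand-in for the JSON Parse utility activity: the activity returns Response as a string even when a Response Format is set, so downstream steps must convert it, and the parse can fail if the model returns free-form text. The function name is illustrative, not part of the product.

```python
import json

def parse_response(response: str):
    """Return the structured object, or None if the text is not valid JSON."""
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        return None

print(parse_response('{"explanation": "ok", "score": 1.0}'))  # {'explanation': 'ok', 'score': 1.0}
print(parse_response("free-form text"))  # None
```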