Generative Prompt

Sends a prompt to an LLM and returns the generated response.

Inputs

Name | Type | Description
Prompt | String | The LLM prompt
Temperature | Number | Measure of randomness in the generated text (adjust only if you understand its effect)
Model | String | The LLM model to use
Response Format | Object (optional) | A JSON-schema response format. Omit for free-form text.

Model options: gemini-2.5-flash, azure-gpt-4o-mini, azure-gpt-4o, azure-gpt-5-mini, azure-gpt-5


Outputs

Name | Type | Description
Response | String | The LLM response
Success | Boolean | Whether the call was successful
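The mapping from these inputs to an LLM request can be sketched as a payload builder, assuming an OpenAI-style chat-completions API. The function name and field names below are assumptions for illustration, not the activity's actual wire format:

```python
def build_llm_request(prompt, model, temperature, response_format=None):
    """Assemble a chat-completions style payload from the activity inputs.

    response_format is the optional JSON-schema object; when omitted,
    the model returns free-form text. Field names follow an OpenAI-style
    API and are an assumption, not the activity's implementation.
    """
    payload = {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    if response_format is not None:
        payload["response_format"] = response_format
    return payload

# Free-form text request (no Response Format supplied)
req = build_llm_request("Summarize this order history.", "azure-gpt-4o-mini", 0.2)
```

Omitting the optional argument leaves `response_format` out of the payload entirely, which mirrors the activity's "omit for free-form text" behavior.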

Response format example

Use the Response Format input to constrain the LLM output to a JSON structure. Example:

{
  "type": "json_schema",
  "json_schema": {
    "name": "ConversationEvaluation",
    "description": "An evaluation of an e-commerce customer service conversation against a given criterion.",
    "schema": {
      "type": "object",
      "properties": {
        "explanation": {
          "description": "The explanation for the verdict, based on the rubric and conversation evidence.",
          "type": "string"
        },
        "score": {
          "description": "A score for the criterion on a 0.0-1.0 scale (in 0.1 increments)",
          "type": "number"
        }
      },
      "required": ["explanation", "score"],
      "additionalProperties": false
    }
  }
}

This constrains the output to a JSON object containing an explanation (string) and a score (number).


Notes

  • The formatted response is returned as a string. If you expect an object, use the JSON Parse utility activity.
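For example, the structured response produced by the schema above arrives as a JSON string and must be parsed before its fields can be used. A minimal sketch of that parsing step (the sample response text is illustrative, not actual activity output):

```python
import json

# The activity returns the formatted response as a string, e.g.:
response = '{"explanation": "The agent resolved the issue politely.", "score": 0.9}'

# Equivalent of the JSON Parse utility step: string -> object
parsed = json.loads(response)

explanation = parsed["explanation"]  # str
score = parsed["score"]              # float, 0.0-1.0
```

With `additionalProperties: false` and both fields marked `required` in the schema, the parsed object should contain exactly these two keys.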