LLM Prompt
The LLM Prompt activity lets you add steps to workflows that interact with Large Language Models (LLMs): write prompts, manage conversational context through chat messages, and define the structure of the expected output.
Key Properties
Complete the following properties to use this activity:
- Model Parameters:
  - Model Name: Choose the OpenAI model that will be used to generate the response.
  - API Version (optional): Specify the LLM model's API version. This property applies primarily to Azure AI integrations; configure it only if you need to override the default version set on the Target.
- Request:
  - Chat Messages: Set the tone, focus, and boundaries for the LLM's responses within a workflow. Provide a list of messages, where each message includes two fields:
    - Content: The actual text or instructions of the message, for example, "act as a network admin".
    - Role: The persona or function of the message (for example, "user", "assistant", or "system"), which helps shape the context and perspective of the interaction.
  - System Prompt: Enter initial instructions or context that guide the LLM's responses. You can use variable references for dynamic requests.
-
Below are some examples illustrating how the chat message fields can be used:
Example 1

Example 2

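As a minimal sketch, the chat message fields described above can be represented as a simple JSON payload. The model name, roles, and message text below are illustrative examples, not values taken from the product:

```python
import json

# Illustrative chat message list for an LLM Prompt activity.
# Each message carries the two fields described above: Role and Content.
chat_messages = [
    {"role": "system", "content": "Act as a network admin."},
    {"role": "user", "content": "Which networks are tagged for outbound blocking?"},
]

# The request pairs the messages with the model selection.
# "gpt-4o" is a hypothetical Model Name choice.
request = {
    "model_name": "gpt-4o",
    "chat_messages": chat_messages,
}

print(json.dumps(request, indent=2))
```

The system message frames the persona once, while user messages carry the per-request instructions.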
- Structured Output (optional): The structured output fields are optional but recommended if you need to extract structured information from the LLM's response.
  - Click Add to define the JSON format that the AI model should use in its responses.
  - Enter the JSON key in the Output Name field and select the data type from the Output Type dropdown (boolean, integer, number, or string).

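As a sketch of what these fields define, each Output Name and Output Type pair maps naturally onto a small JSON Schema that the model is asked to follow. The field names below are hypothetical:

```python
import json

# Illustrative structured-output definition: each entry pairs an
# Output Name with an Output Type from the dropdown described above.
output_fields = [
    {"output_name": "cidr_block", "output_type": "string"},
    {"output_name": "rule_count", "output_type": "integer"},
]

# A minimal JSON Schema built from those definitions.
schema = {
    "type": "object",
    "properties": {
        f["output_name"]: {"type": f["output_type"]} for f in output_fields
    },
    "required": [f["output_name"] for f in output_fields],
}

print(json.dumps(schema, indent=2))
```

Constraining the response to typed keys like these is what lets downstream workflow steps consume the LLM's answer without free-text parsing.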
Next Steps
The Cisco Workflows Automation Exchange offers pre-built, Cisco-validated workflows. To explore the LLM Prompt activity, you can install the MX Firewall – Block Outbound Traffic by Tags workflow. This workflow discovers Meraki MX networks with matching tags and deploys outbound firewall rules to block traffic to a specified IP address or CIDR across all relevant networks.
The example below, taken from that workflow, shows how the LLM Prompt activity can extract the CIDR block and network region from ServiceNow incident text using structured LLM parsing.
Example 3
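A rough sketch of what that extraction step yields: once structured output is configured, the activity's response arrives as JSON with the defined keys, which later workflow steps can reference directly. The response text and field names here are hypothetical:

```python
import json

# Hypothetical structured response from the LLM Prompt activity after
# it parses a ServiceNow incident description.
llm_response = '{"cidr_block": "203.0.113.0/24", "network_region": "us-west"}'

parsed = json.loads(llm_response)

# Downstream steps (e.g., deploying firewall rules) read the typed fields.
target_cidr = parsed["cidr_block"]
region = parsed["network_region"]
print(target_cidr, region)
```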


