AI Personas

AI Personas define the identity, intelligence, behavior, and capabilities of your custom agents in Fabrix.ai. A persona determines how the AI thinks, which tools it can use, which models power it, and how it processes conversation history.

Personas act as the brain of your agent, combining LLMs, toolsets, and prompt templates into a single intelligent workflow.

πŸ“Œ Where Personas Are Managed

Inside each AI Project β†’ MCP β†’ Personas, users can:

  • View all personas in the project
  • Add new personas
  • Edit persona configuration
  • Assign LLMs
  • Define guardrails
  • Configure toolset & prompt access policies

🧩 Persona Configuration Fields (Add Persona Modal)

When creating a new persona, you fill out several key fields:

1️⃣ Basic Details

Name

A human-friendly display name for the persona.

Examples:

  • "Resume Agent"
  • "AIOps Troubleshooter"
  • "Salesforce Bellbox Agent"

Description

Used for listing and UI display.

This does not control behavior (that comes from prompts and system instructions).

Color

A unique color tag used only for UI identity.

2️⃣ Introductory Prompt

These are the default prompt messages preloaded when a conversation starts.

πŸš€ Purpose of Introductory Prompt

  • Shows predefined instructions to the user
  • Provides starting buttons or guidance
  • Executes automatically when clicked
  • Often includes sample queries or onboarding steps

Example:

Welcome! I can help you analyze logs, troubleshoot incidents, and identify root causes.

Try asking:
- "Analyze errors in the last 1 hour"
- "Find anomalies in CPU metrics"

Multiple prompts can be addedβ€”each is a quick-start action.

3️⃣ LLM Selection

You must select one or more LLM models that the persona can use.

This determines:

  • Model availability
  • Routing rules
  • Final response formatting

Example options:

  • gpt-4.1
  • claude-sonnet-4
  • gpt-4o
  • prod-claude-sonnet-4

βœ” You can select multiple models.

βœ” The system internally chooses the best model depending on the instruction.
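
How the system picks among the selected models is internal and not documented on this page. The sketch below is purely hypothetical: a toy router that chooses one model from the persona's selected list based on the instruction, just to illustrate the idea.

```python
# Hypothetical sketch only: Fabrix.ai's real routing logic is internal.
# This toy router picks a model from the persona's selected list.
def route_model(selected_models: list[str], instruction: str) -> str:
    # Assumed heuristic for illustration: prefer claude-sonnet-4 for code tasks.
    if "code" in instruction.lower() and "claude-sonnet-4" in selected_models:
        return "claude-sonnet-4"
    return selected_models[0]  # fall back to the first selected model
```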

4️⃣ Guardrails (Optional)

Guardrails are rule-based constraints to control the assistant's behavior.

Examples:

  • Prevent sharing classified data
  • Limit actions to specific domains
  • Enforce safe response patterns

If no guardrails are created, this section will show "No data available".
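
Fabrix.ai's guardrail engine is configured in the UI and its internals are not shown here. The toy filter below is only an illustration of the rule-based idea: each rule (both rule names and patterns are invented for this example) is a pattern that must not appear in a response.

```python
import re

# Illustrative sketch only; rule names and patterns are invented examples.
GUARDRAIL_RULES = [
    # (rule name, pattern that must NOT appear in a response)
    ("no_classified_markers", re.compile(r"\b(CLASSIFIED|TOP SECRET)\b", re.I)),
    ("no_internal_ips", re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b")),
]

def check_guardrails(response: str) -> list[str]:
    """Return the names of any guardrail rules the response violates."""
    return [name for name, pattern in GUARDRAIL_RULES if pattern.search(response)]
```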

πŸ›‘ Access Policy (MOST IMPORTANT SECTION)

The Access Policy defines:

  • βœ” Which MCP server this persona talks to
  • βœ” Which toolsets it is allowed to use
  • βœ” Which prompt templates it can access
  • βœ” Whether to optimize conversation using AI
  • βœ” Whether to format final responses using another LLM
  • βœ” Whether mandatory internal tools auto-execute

This is where you connect Toolsets + Prompt Templates + System Behavior.

✨ Access Policy JSON Example

[
  {
    "mcpserver": "rdaf",
    "toolset_pattern": "aiops.*|snmp.*|syslog.*|backup.*|network.*|context.*|common.*|post_to.*",
    "prompt_templates_pattern": "incident_remediate_recommend.*",
    "optimizeHistoryUsingAI": true,
    "formatFinalResponseUsingAI": true,
    "enableLearning": true,
    "system_instruction_name": "default system interaction",
    "prepolulateMandatoryTools": true
  }
]
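
As a sanity check before saving, the policy document can be parsed as ordinary JSON. The snippet below is a minimal sketch; the choice of which keys to treat as required is an assumption for illustration, not a documented schema. Field names follow the example above verbatim (including "prepolulateMandatoryTools").

```python
import json

# Access policy from the example above, shortened for brevity.
policy_json = """
[
  {
    "mcpserver": "rdaf",
    "toolset_pattern": "aiops.*|snmp.*",
    "prompt_templates_pattern": "incident_remediate_recommend.*",
    "optimizeHistoryUsingAI": true,
    "formatFinalResponseUsingAI": true,
    "enableLearning": true,
    "system_instruction_name": "default system interaction",
    "prepolulateMandatoryTools": true
  }
]
"""

# Assumed required keys, for illustration only.
REQUIRED_KEYS = {"mcpserver", "toolset_pattern", "prompt_templates_pattern"}

def validate_policy(raw: str) -> list[dict]:
    """Parse an access-policy document and check each entry has the required keys."""
    entries = json.loads(raw)
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"policy entry missing keys: {sorted(missing)}")
    return entries

entries = validate_policy(policy_json)
```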

πŸ” Detailed Explanation of Access Policy Fields

πŸ“Œ mcpserver

Specifies which MCP server the persona is connected to.

Example:

"mcpserver": "rdaf"

πŸ“Œ toolset_pattern

Regex pattern of toolsets the persona is allowed to use.

Example:

"toolset_pattern": "aiops.*|network_automation|context_cache.*"

Allows:

  • AIOps tools
  • Network automation tools
  • Context cache tools

πŸ“Œ prompt_templates_pattern

Defines which prompt templates this persona can use via regex.

Example:

"prompt_templates_pattern": "resume_builder.*|incident_.*"

πŸ“Œ optimizeHistoryUsingAI

Enables internal SLMs (small language models) to compress earlier conversation content to save tokens.

  • βœ” Better long conversations
  • βœ” Lower cost
  • βœ” Faster responses
"optimizeHistoryUsingAI": true

πŸ“Œ formatFinalResponseUsingAI

Before sending the final content to the user, the system sends the LLM output to another model for formatting (HTML, markdown, structure, etc.).

Improves:

  • HTML dashboards
  • Structured reports
  • Readability

"formatFinalResponseUsingAI": true

πŸ“Œ enableLearning

Allows the persona to improve using local learning mechanisms.

πŸ“Œ system_instruction_name

References a predefined system instruction stored in MCP.

Example:

"system_instruction_name": "default system interaction"

πŸ“Œ prepolulateMandatoryTools

Automatically runs required MCP tools without requiring the main LLM to decide when to call them.

Tools that auto-run every time:

  • get_conversation_history
  • list_prompt_templates_by_persona
  • get_persona_details

This guarantees:

  • βœ” Consistent context
  • βœ” Stable multi-step workflows
  • βœ” Faster tool selection

"prepolulateMandatoryTools": true

🧩 Final Persona Workflow Summary

When the persona is created, it determines:

  • 🧠 Behavior β†’ From Introductory Prompt + System Instructions
  • πŸ›  Tools it can use β†’ From toolset_pattern
  • πŸ“‹ Prompt workflows β†’ From prompt_templates_pattern
  • πŸ€– Models powering it β†’ From LLM selection
  • πŸ›‘ Safety rules β†’ From Guardrails
  • βš™οΈ Internal optimization β†’ From optimizeHistoryUsingAI & formatFinalResponseUsingAI