# AI Model

The AI Model block enables you to select the model that will drive your AI agent's responses. Choose from a variety of models to ensure your agent delivers accurate and contextually relevant interactions. Additionally, you can configure the prompt at this step to tailor the agent's communication style and focus.

# Power Your AI Agents with LLMs

Large Language Models (LLMs) are the foundation of FabriXAI’s AI agents, enabling them to communicate through natural language interactions. These models process user inputs and generate responses based on a "Prompt" you provide.
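
FabriXAI manages the model call for you, but it can help to see what a prompt-driven interaction looks like under the hood. The minimal sketch below uses the OpenAI Python SDK purely as an illustration; the model name and prompt text are examples, not FabriXAI's internals:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; FabriXAI lets you pick from several
    messages=[
        # The "Prompt" you write in the AI Model block plays this role:
        {"role": "system", "content": "You are a friendly onboarding assistant. "
                                      "Answer in two sentences or fewer."},
        # Each user input is passed along as a user message:
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```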

# Steps to Configure the AI Model

  1. Navigate to the AI Model block in the AI agent builder.
  2. Click "Edit".
  3. Write a Prompt that provides instructions for your AI agent to follow.

    Note: Please refer to Notes on Writing Effective Prompts for more guidance.

  4. (Optional) If your AI agent is configured as a Chatbot, you can add a Greeting Message. This message will be sent to the user at the beginning of each conversation.
  5. Select the desired AI Model that best suits your task.

    Note: For the list of supported AI Models, please refer to What AI Models are Supported?

  6. (Optional) Open the "Advanced" accordion to further adjust the model’s parameters.

    Note: Please refer to Guidance on Setting Parameters for LLM for more details.

  7. Click "Continue" to proceed.
  8. (Coming Soon) You can add external tools or APIs to enhance the capabilities of your AI agent. Learn more in Power-Up.
  9. Click "Save" to finalize the AI Model configuration.
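
Taken together, the steps above assemble a small amount of configuration. The sketch below summarizes it as a Python dictionary for illustration only; the field names are hypothetical and do not reflect FabriXAI's actual schema:

```python
# Hypothetical summary of the wizard's output. Field names are
# illustrative, not FabriXAI's actual configuration format.
agent_config = {
    "prompt": "You are a concise support assistant for an online store.",  # step 3
    "greeting_message": "Hi! How can I help you today?",  # step 4 (chatbots only)
    "model": "gpt-4o-mini",                               # step 5 (example model)
    "advanced": {                                         # step 6
        "temperature": 0.2,
        "top_p": 0.9,
        "presence_penalty": 0.5,
        "frequency_penalty": 0.5,
        "max_tokens": 300,
    },
}
```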

# Notes on Writing Effective Prompts

  • Be specific about the task you want the AI agent to perform.
  • Include examples to guide the model’s behavior.
  • Use structured formats when appropriate (e.g., bullet points or numbered lists).
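
For instance, a prompt that applies all three tips might look like this (an illustrative example, not a required template):

```
You are a customer-support agent for an online bookstore.

Task: Answer order-status questions in a friendly, concise tone.

Rules:
1. Ask for the order number if the user has not provided one.
2. Never promise a delivery date you cannot verify.

Example:
User: Where is my order?
Agent: Happy to check! Could you share your order number?
```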

For more tips on crafting effective prompts, visit FabriXAI's Prompt Engineering Blog.

# Guidance on Setting Parameters for LLM

These parameters determine the style and tone of your AI agent's responses. We offer pre-set values for common response styles, and you can also customize the settings using the following options (a code sketch after the list shows how they map onto a typical LLM API):

  1. Temperature: Controls the randomness of the output. A lower value (e.g., 0.2) produces more focused, deterministic responses, while a higher value (e.g., 0.8) encourages creativity and variety in the replies.
  2. Top P: Also known as nucleus sampling, this parameter restricts token selection by cumulative probability. At 0.9, the model samples only from the smallest set of tokens whose combined probability reaches 90%, allowing for controlled randomness.
  3. Presence Penalty: Adjusts how likely the model is to introduce new topics. A positive value (e.g., 0.5) penalizes tokens that have already appeared, nudging the model toward new concepts, while a value at or near 0 permits more repetition.
  4. Frequency Penalty: Reduces the likelihood of the model repeating a token in proportion to how often it has already appeared. A value of 0.5 mildly penalizes frequently used tokens, promoting variety in the output.
  5. Max Tokens: Caps the length of the generated response; output is cut off once the limit is reached. Choose a value that suits the task, for example around 150 tokens for short answers or 300 for longer explanations.
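
These controls correspond to standard LLM sampling parameters. As an illustration (not FabriXAI's internal API), here is how the same five settings appear in a direct call with the OpenAI Python SDK; the model name and values are examples only:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # example model
    messages=[{"role": "user", "content": "Suggest a tagline for a coffee shop."}],
    temperature=0.8,       # higher -> more varied, creative replies
    top_p=0.9,             # sample from the top 90% of probability mass
    presence_penalty=0.5,  # nudge the model toward new topics
    frequency_penalty=0.5, # damp repetition of frequent tokens
    max_tokens=150,        # hard cap on response length
)
print(response.choices[0].message.content)
```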