- Have basic Python programming skills and a basic understanding of object-oriented programming.
- Be familiar with the API documentation and authentication methods provided by the model provider you want to integrate.
- Have installed and configured the Dify plugin development toolkit (refer to Initializing Development Tools).
- (Optional) Read the Model Plugin Introduction document to understand the basic concepts and architecture of model plugins.
Step 1: Create Directory Structure
A well-organized directory structure is the foundation for developing maintainable plugins. You need to create specific directories and files for your model provider plugin.

- Locate or Create the Provider Directory: In the `models/` directory of your plugin project (typically a local clone of `dify-official-plugins`), find or create a folder named after the model provider (e.g., `models/my_new_provider`).
- Create a `models` Subdirectory: In the provider directory, create a `models` subdirectory.
- Create Subdirectories by Model Type: In the `models/models/` directory, create a subdirectory for each model type you need to support. Common types include:
  - `llm`: Text generation models
  - `text_embedding`: Text embedding models
  - `rerank`: Rerank models
  - `speech2text`: Speech-to-text models
  - `tts`: Text-to-speech models
  - `moderation`: Content moderation models
- Prepare Implementation Files:
  - In each model type directory (e.g., `models/models/llm/`), create a Python file that implements the calling logic for that type of model (e.g., `llm.py`).
  - In the same directory, create a YAML configuration file for each specific model of that type (e.g., `my-model-v1.yaml`).
  - (Optional) Create a `_position.yaml` file to control the display order of that type's models in the Dify UI.
For example, a provider `my_provider` that supports LLM and Text Embedding models would have `llm` and `text_embedding` subdirectories under `models/models/`.
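The layout described above can be sketched as follows (the provider and file names here are illustrative placeholders, not required names):

```
models/my_provider/
├── models/
│   ├── llm/
│   │   ├── llm.py
│   │   ├── _position.yaml
│   │   └── my-llm-model-v1.yaml
│   └── text_embedding/
│       ├── text_embedding.py
│       └── my-embedding-model.yaml
└── ...
```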
Step 2: Define Model Configuration (YAML)
For each specific model, you need to create a YAML file that describes its properties, parameters, and features so that Dify can correctly understand and use it.

- Create the YAML File: In the corresponding model type directory (e.g., `models/models/llm/`), create a YAML file for the model you want to add. The filename typically matches or describes the model ID (e.g., `my-llm-model-v1.yaml`).
- Write the Configuration Content: Follow the AIModelEntity Schema Definition specification to write the content. Key fields include:
  - `model`: (Required) The official API identifier for the model.
  - `label`: (Required) The name displayed in the Dify UI (supports multiple languages).
  - `model_type`: (Required) Must match the directory type (e.g., `llm`).
  - `features`: (Optional) Declares special features supported by the model (e.g., `vision`, `tool-call`, `stream-tool-call`).
  - `model_properties`: (Required) Defines inherent model properties, such as `mode` (`chat` or `completion`) and `context_size`.
  - `parameter_rules`: (Required) Defines user-adjustable parameters and their rules (`name`, `type`, `required`, `default`, `min`/`max`, `options`, etc.). You can use `use_template` to reference predefined templates that simplify configuration of common parameters (such as `temperature` and `max_tokens`).
  - `pricing`: (Optional) Defines billing information for the model.
For example, Anthropic's Claude 3.5 Sonnet model would be described by a file named `claude-3-5-sonnet-20240620.yaml`.
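A minimal sketch of such a file, using the fields listed above (the feature list, parameter bounds, and pricing values are illustrative and should be taken from the provider's official documentation):

```yaml
model: claude-3-5-sonnet-20240620
label:
  en_US: Claude 3.5 Sonnet
model_type: llm
features:
  - vision
  - tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    default: 4096
    min: 1
    max: 8192
pricing:
  input: "3.00"
  output: "15.00"
  unit: "0.000001"
  currency: USD
```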
Step 3: Write Model Calling Code (Python)
This is the core step for implementing model functionality. You write code in the corresponding model type's Python file (e.g., `llm.py`) to handle API calls, parameter conversion, and returning results.
- Create/Edit the Python File: Create or open the corresponding Python file (e.g., `llm.py`) in the model type directory (e.g., `models/models/llm/`).
- Define the Implementation Class:
  - Define a class, for example, `MyProviderLargeLanguageModel`.
  - This class must inherit from the corresponding model type base class in the Dify plugin SDK. For example, for LLMs you inherit from `dify_plugin.provider_kits.llm.LargeLanguageModel`.
- Implement the Key Methods: (The exact methods to implement depend on the base class you inherit; LLM is used as the example below.)
  - `_invoke(...)`: The core calling method.
    - Signature: `def _invoke(self, model: str, credentials: dict, prompt_messages: List[PromptMessage], model_parameters: dict, tools: Optional[List[PromptMessageTool]] = None, stop: Optional[List[str]] = None, stream: bool = True, user: Optional[str] = None) -> Union[LLMResult, Generator[LLMResultChunk, None, None]]`
    - Responsibilities:
      - Prepare API requests using `credentials` and `model_parameters`.
      - Convert Dify's `prompt_messages` format to the format required by the provider API.
      - Handle the `tools` parameter to support Function Calling / Tool Use (if the model supports it).
      - Decide between a streaming call and a synchronous call based on the `stream` parameter.
      - Streaming return: If `stream=True`, this method must return a generator (`Generator`) that uses `yield` to return `LLMResultChunk` objects piece by piece. Each chunk contains partial results (text, tool-call blocks, etc.) and optional usage information.
      - Synchronous return: If `stream=False`, this method must return a complete `LLMResult` object containing the final text, the complete list of tool calls, and total usage information (`LLMUsage`).
    - Implementation pattern: It is strongly recommended to split the synchronous and streaming logic into internal helper methods.
  - `validate_credentials(self, model: str, credentials: dict) -> None`: (Required) Validates credentials when a user adds or modifies them. This is typically implemented by calling a simple, low-cost API endpoint (such as listing available models or checking the balance). If validation fails, raise `CredentialsValidateFailedError` or a subclass of it.
  - `get_num_tokens(self, model: str, credentials: dict, prompt_messages: List[PromptMessage], tools: Optional[List[PromptMessageTool]] = None) -> int`: (Optional but recommended) Estimates the number of tokens for a given input. If it cannot be computed accurately or the API does not support it, return 0.
  - `@property _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]`: (Required) Defines an error mapping dictionary. The keys are standard `InvokeError` subclasses from Dify, and the values are lists of exception types that the vendor SDK may raise and that should be mapped to that standard error. This is crucial for Dify to handle errors from different providers uniformly.
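The recommended split between synchronous and streaming logic can be sketched as follows. This is an illustrative pattern only, not the real SDK: the `LLMResult` and `LLMResultChunk` classes below are simplified stand-ins for the Dify SDK types, and the helper bodies fake the provider API call. In a real plugin, the class would inherit from the SDK base class and the helpers would call the provider's client.

```python
# Sketch of the _invoke pattern: dispatch to a synchronous or a
# streaming helper, as the guide recommends. All types are stand-ins.
from dataclasses import dataclass
from typing import Generator, Union


@dataclass
class LLMResultChunk:  # stand-in for the SDK's streaming chunk type
    delta_text: str


@dataclass
class LLMResult:  # stand-in for the SDK's complete-result type
    text: str
    usage_tokens: int = 0


class MyProviderLargeLanguageModel:  # would inherit the SDK base class
    def _invoke(
        self,
        model: str,
        credentials: dict,
        prompt_messages: list,
        model_parameters: dict,
        stream: bool = True,
    ) -> Union[LLMResult, Generator[LLMResultChunk, None, None]]:
        # Choose the path up front and delegate to a dedicated helper,
        # keeping streaming and blocking logic separate.
        if stream:
            return self._invoke_stream(model, credentials, prompt_messages, model_parameters)
        return self._invoke_sync(model, credentials, prompt_messages, model_parameters)

    def _invoke_stream(self, model, credentials, prompt_messages, model_parameters):
        # A real implementation would iterate over the provider's
        # streaming response; here we fake three chunks.
        for piece in ["Hel", "lo", "!"]:
            yield LLMResultChunk(delta_text=piece)

    def _invoke_sync(self, model, credentials, prompt_messages, model_parameters) -> LLMResult:
        # A real implementation would make one blocking API call.
        return LLMResult(text="Hello!", usage_tokens=3)
```

Because `_invoke` itself contains no `yield`, it stays an ordinary method: calling it with `stream=True` returns the generator produced by the streaming helper, and with `stream=False` it returns a complete result object.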
Step 4: Debug Plugin
Thorough testing and debugging are essential before contributing your plugin to the community. Dify provides remote debugging capabilities, allowing you to modify code locally and test the changes in real time against a Dify instance.

- Get Debugging Information:
  - In your Dify instance, go to the "Plugin Management" page (may require administrator privileges).
  - Click "Debug Plugin" in the top right corner of the page to get your `Debug Key` and `Remote Server Address` (e.g., `http://<your-dify-domain>:5003`).
- Configure the Local Environment:
  - In the root directory of your local plugin project, find or create a `.env` file (it can be copied from `.env.example`).
  - Edit the `.env` file, filling in the debugging information you obtained above.
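The filled-in `.env` might look roughly like this. The variable names below follow the `.env.example` shipped with the plugin scaffold and may differ between toolkit versions; treat them as illustrative and check your own copy:

```
INSTALL_METHOD=remote
REMOTE_INSTALL_HOST=<your-dify-domain>
REMOTE_INSTALL_PORT=5003
REMOTE_INSTALL_KEY=<your-debug-key>
```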
- Start the Local Plugin Service:
  - In the plugin project root directory, make sure your Python environment is activated (if you use a virtual environment).
  - Run the main program.
  - Observe the terminal output. If the connection succeeds, there will typically be a corresponding log message.
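The main program is typically started like this (the `main` entry point comes from the plugin scaffold; check your project if it differs):

```
python -m main
```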
- Test in Dify:
  - Refresh the "Plugins" or "Model Providers" page in Dify; you should see your local plugin instance, possibly with a "Debugging" label.
  - Go to "Settings" -> "Model Providers", find your plugin, and configure valid API credentials.
  - Select and use your model in a Dify application for testing. Local changes to the Python code (the service typically auto-reloads after saving) take effect directly in Dify. Dify's debug preview feature can help you inspect inputs, outputs, and error messages.
Step 5: Package and Publish
When you've completed development and debugging and are satisfied with the plugin's functionality, you can package it and contribute it to the Dify community.

- Package the Plugin:
  - Stop the local debugging service (`Ctrl+C`).
  - Run the packaging command in the plugin project root directory.
  - This generates a `<provider_name>.difypkg` file in the project root directory.
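With the `dify` CLI from the development toolkit, the packaging command looks roughly like this (the exact subcommand may vary by toolkit version; `<provider_name>` is your plugin's directory):

```
dify plugin package ./<provider_name>
```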
- Submit a Pull Request:
  - Ensure your code style is clean and follows Dify's plugin publishing specifications.
  - Push your local Git commits to your fork of the `dify-official-plugins` repository.
  - Open a Pull Request against the main `langgenius/dify-official-plugins` repository on GitHub. In the PR description, clearly describe the changes you've made, the models or features you've added, and any necessary testing instructions.
  - Wait for review by the Dify team. Once reviewed and merged, your contribution will be included in the official plugins and available on the Dify Marketplace.
Explore More
- Model Schema Definition (model YAML specifications)
- Plugin Manifest Structure (`manifest.yaml` specifications)
- Dify Plugin SDK Reference (base classes, data structures, and error types)
- Dify Official Plugins Repository (implementations of existing plugins)