# FAQ
# 1. Platform Consultation
1.Agent Calling Consultation: Can Xingchen Agent be called directly like a large model?
Answer: Yes, the Agent can be called directly after being published as an API.
2.Account Consultation: What is the maximum number of Agents that can be created with one account?
Answer: 100.
3.Does the Xingchen Agent Development Platform support local deployment?
Answer: Yes, it does. You can fill in the form at the bottom of the official website homepage to contact platform personnel for detailed information.
4.Does the Xingchen Agent Platform support multi-agent collaboration?
Answer: Currently, workflow agents support nesting another workflow agent by using the [Workflow Node]. Note that the workflow node must be successfully published before it can be referenced.
5.Can the Agent of iFlytek Xingchen Agent Development Platform be quickly embedded and integrated into mini-programs or other systems through JS?
Answer: Yes, the current platform supports publishing as an API, which can be integrated and called through the API.
6.What to do if "Sorry, the service is temporarily unavailable" appears? What if there is no output on the conversation page? What if the conversation page responds with a timeout?
Answer: There are several common causes; check them in order:
1. A problem with your own network connection.
2. The conversation page version differs from the debugging page version; make sure the debugged version has been published and approved.
3. A long-running node in the workflow agent causes the response to time out. Solution: on the agent debugging page, add a message node after the slow node and turn on streaming output so the agent keeps returning frames during Q&A.
4. The answer contains politically sensitive or otherwise sensitive vocabulary and was intercepted by content review.
5. A tool node takes too long to process, causing a timeout. Check the debugging page for slow nodes, add a message node after each one, and turn on streaming output.
6. If the problem persists after the checks above, contact the administrator in the communication group.
7.Introduction to Multi-turn Conversation, Historical Conversation, and Context Memory Functions
Answer: When the agent built by the user is a prompt agent, you can find [Advanced Configuration] on the prompt agent building page and turn on [Support Multi-turn Conversation].
8.Can the default intent in the decision node be unconnected? Must the default intent be connected?
Answer: The default intent also needs to be connected as an independent branch link.
9.Can the Agent created on the platform be sold to others? Is it illegal? Who is the owner of the Agent? To whom does the Agent belong?
Answer: Within the scope permitted by law, all rights to the content you enter on the platform's Agents (input content) — text, pictures, audio, and so on — and to the content generated by using this service (output content) belong to you or the original rights holder. Uploading or publishing input content does not transfer intellectual property or other rights. You may integrate a created Agent through its API yourself; whether a particular use is legal must be judged against the specific laws and regulations that apply to your behavior.
# 2. Common Issues of Workflow Agents
# 2.1 Common Issues of Templates
1.Does the platform have relevant workflow templates, such as customer service scenarios?
Answer: Yes, in addition to this, the platform also provides other rich workflow scenario templates. For specific access: Platform Homepage --> My Agents --> Create New Agent --> Workflow Creation --> Multiple templates are available for selection.
2.Are there interfaces that can directly call the workflow templates?
Answer: You can create an identical workflow from the template, publish it as an API yourself, and then call it through that API.
# 2.2 Common Issues of Large Model Nodes
1.What to do if the output result of the large model node does not meet expectations? Or how to make the large model output in a specific template and format?
Answer: Such problems are usually caused by the prompt. Optimize it — for example, include an output example in the prompt so that the large model strictly follows the example when generating results.
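As a hedged illustration of the advice above, the sketch below builds a system prompt that embeds an explicit output example to constrain the model's format. The task, field names, and message structure are invented for demonstration, not platform-specific.

```python
# Hypothetical example: constraining a large-model node's output by
# embedding an explicit format example directly in the prompt.
system_prompt = """You are a product-review summarizer.
Always answer strictly in the following JSON format, with no extra text:

{"sentiment": "positive", "summary": "one-sentence summary"}
"""

def build_messages(user_text: str) -> list[dict]:
    """Pair the format-constrained system prompt with the user's input."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
```

The key point is that the example in the prompt shows the model the exact shape of the answer, which is usually more reliable than describing the format in words alone.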
2.Can the model remember the context conversation? How to make the conversation history work in the debugging interface?
Answer: Just check the chat history in the large model node, and you can set the number of conversation rounds.
3.The chat history is checked in the large model node, but it does not take effect in practice, and the input in the running result does not have the chat history field. What is the reason?
Answer: In such cases, the input is most likely being truncated by the model's parameter limits; setting too many rounds or carrying too much content can cause this. Usually you can increase the model's maximum response length and slightly reduce the number of conversation rounds.
4.What is the difference between the system prompt and the user prompt of the large model?
Answer: The system prompt is a preset global instruction used to define the model's behavior framework, role identity, capability boundaries and output style.
The user prompt generally refers to a specific task proposed, with a clear scenario.
5.Does the model in the large model node support web search?
Answer: No, if you want to perform web search, you can add a tool node and select the [Web Search] tool.
# 2.3 Common Issues of Agent Decision Nodes
1.How does the Agent intelligent decision node call plugins?
Answer:
- First, ensure that the intelligent decision node has added relevant plugins. The decision node can independently think and plan to call the added plugins according to the user's demands.
- Users can also clearly tell the Agent decision node in the prompt under what conditions to call a certain plugin.
2.What to do if the Agent intelligent decision node reports a timeout error during operation?
Answer: The current gateway will disconnect the link and report a timeout error if there is no return within 2 minutes. You can try to use the message node to output the thinking process in a streaming manner. Note that the [Streaming Output] switch needs to be turned on.
3.What to do if the Agent decision node fails (the inference content format returned by the model is incorrect, invalid plugin parameters)?
Answer: Check whether the content output by the large model is reasonable and well-formed (for example, whether the JSON contains abnormal characters), and confirm that the Agent decision node has the relevant plugins added.
4.What to do if the Agent node execution fails (the inference content format returned by the model is incorrect, invalid inference format, missing necessary identification fields...)?
Answer: It may be due to model effect issues or incorrect prompts. Suggestions: Change to another model or optimize the prompt.
# 2.4 Common Issues of Workflow Nodes
1.Can each node of the workflow add descriptions to facilitate others to understand the user's intention?
Answer: Each node supports adding comments to increase explanations for others' understanding, as shown in the figure below:
2.What to do if an exception is reported at the end node of workflow debugging: "Node validation failed, please check for null values or non-compliance with naming rules"?
Answer: Check the canvas to see if there are missing or incorrect relevant parameters.
3.Why can't the workflow node output or fails to output?
Answer: The usual reasons a workflow node cannot output are:
1. The referenced workflow is complex and its output is large, so the response times out; since the workflow node does not support streaming output, the output fails.
2. The referenced workflow contains Q&A nodes. If it does, the workflow node does not support output.
4.What to do if an error is reported during workflow import: "Workflow engine node protocol validation failed"?
Answer: An error during workflow import is usually caused by missing resources — the imported workflow references resources that exist under someone else's account but not under yours:
1. The imported workflow contains a model (ModelId) that your account does not have.
2. The imported workflow contains a tool (pluginId) that your account does not have.
3. The imported workflow contains a knowledge base (knowledgeId) that your account does not have.
5.What to do if the node takes too long, consumes too much time, runs for too long, or runs too slowly? How to improve the running speed?
Answer: Long node running time is mostly caused by the model. You can switch to a smaller model (its output is faster), use a model without a thinking step, or optimize the prompt to cut unnecessary model thinking, all of which reduce time consumption. It is also recommended to add a message node after the slow node and turn on streaming output, which reduces waiting time.
# 2.5 Common Issues of Code Nodes
1.What is the problem with the code node reporting an error 500?
Answer: The parameters and return values of `main` in the code node must correspond to the node's configured inputs and outputs; check this correspondence first.
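A minimal sketch of the point above: the `main` function's parameters must match the node's configured input variables, and the returned dictionary keys must match its configured output variables. The names `text`, `word_count`, and `upper_text` here are assumptions for illustration only.

```python
# Hedged sketch of a code-node `main`. Suppose the node is configured
# with one input `text` and two outputs `word_count` and `upper_text`.
def main(text: str) -> dict:
    # The returned keys must exactly match the output variables
    # configured on the node; a mismatch typically surfaces as a
    # 500-style node error.
    return {
        "word_count": len(text.split()),
        "upper_text": text.upper(),
    }
```

If the node configuration declares an output that `main` never returns (or vice versa), the validation mismatch is the first thing to look for when a 500 error appears.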
2.What to do if there is no output result when the code node requests the network? How to request the network?
Answer: The code node does not support network requests. To request the network, you can access it through [Homepage - Resource Management - Create New Plugin].
3.What to do if the code node times out?
Answer: The code node currently does not support network requests. Please confirm whether the timeout is caused by an attempted network request.
# 2.6 Common Issues of Iteration Nodes
1.Why does the workflow run normally without errors, but a certain node does not output normally? For example, the iteration node does not output correctly?
Answer: Check the running results to see whether the node's input is correct; a node only produces the expected output when its input is correct.
2.How to do iterative optimization?
Answer: Create a workflow agent and use the iteration node. The iteration node lets you set tasks or operations to be executed repeatedly, similar to a for loop in a programming language: it traverses a known array and executes the same series of steps for each element. On each loop pass, the workflow executes every node on the canvas in turn.
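The for-loop analogy above can be sketched in plain code (this is not platform code — the sub-workflow is stood in for by a simple string transformation):

```python
# Rough analogy for an iteration node: traverse an input array and
# run the same sub-steps for each element, collecting the results.
def iterate(items: list[str]) -> list[str]:
    results = []
    for item in items:                    # one pass == one iteration round
        processed = item.strip().title()  # stand-in for the sub-workflow steps
        results.append(processed)
    return results
```

Just as in the sketch, the iteration node's output is an array with one result per input element, produced by running the same canvas steps each round.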
# 2.7 Common Issues of Q&A Nodes
1.When calling the API for the Q&A node, can a timeout limit be set for the conversation, such as requiring a response within a certain time, otherwise it will become invalid?
Answer: Yes, it can be set in the answer mode settings and conversation timeout settings in the [Q&A Node].
# 2.8 Common Issues of Variable Storage Nodes
1.How to set global variables? How does the variable storage get variables?
Answer: You can use the variable storage node with [Set Variable Value] for long-term data storage. To read the variable, add another variable storage node and select [Get Variable Value]. For reference, see the usage of the [Variable Storage Node] in section 3-2 of the developer guide: https://www.xfyun.cn/doc/spark/Agent03-%E5%BC%80%E5%8F%91%E6%8C%87%E5%8D%97.html#_3-2-%E5%B7%A5%E4%BD%9C%E6%B5%81%E6%99%BA%E8%83%BD%E4%BD%93%E5%BC%80%E5%8F%91
# 2.9 Start Node, End Node
1.How to upload files such as pictures, audio, PDF, Word, PPT, Excel in the Agent?
Answer: The start node supports uploading multi-modal files such as pictures, audio, PDF, Word, PPT, Excel, Txt, etc., and you can customize a variable for the file. When the above files are uploaded to the platform, they will be automatically converted into URLs with an unlimited storage validity period. You can parse and process them with the help of relevant plugin tools on the platform. For API calls, refer to the [API Call -- File Upload] description part in the official platform documentation.
2.Can audio files only be placed in the start node? Can audio files be uploaded during the Q&A process?
Answer: Audio files can only be uploaded through the start node, and upload during the Q&A process is not supported.
3.What does streaming output mean?
Answer: Streaming Output: The model does not generate a complete answer at one time, but generates it word by word or sentence by sentence (usually in units of Tokens or word fragments). Each time a small part of the content is generated, it is immediately transmitted to the client (such as your browser or App) through the network, and the client can display or process these partial results in real time. This means that generation and transmission are parallel.
In contrast, there is Non-streaming Output: The model needs to completely generate the entire answer content internally first, then package all the content at once and return it to the client through a complete HTTP response. This means that generation and transmission are serial, and the user needs to wait for the entire process to complete.
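The contrast above can be illustrated with a toy sketch (not platform code): a streaming response yields partial chunks as soon as each is produced, while a non-streaming response returns only after the full answer is assembled.

```python
from typing import Iterator

def stream_answer(tokens: list[str]) -> Iterator[str]:
    """Streaming: each chunk is handed to the caller as soon as it exists."""
    for tok in tokens:
        yield tok

def full_answer(tokens: list[str]) -> str:
    """Non-streaming: the caller waits until the whole answer is assembled."""
    return "".join(tokens)
```

A client consuming `stream_answer` can display text while generation is still in progress, which is why turning on streaming output keeps frames flowing and avoids gateway timeouts.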
# 2.10 Tool Nodes
1.How to solve the tool request failure (error message: The interface return value type does not match the tool configuration, detailed information: Parameter path: $.result, error message: '' is not of type 'array')?
Answer: Check the input of the tool node. The parameter type does not match and needs to be modified.
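To make the error message concrete, the sketch below mirrors the `$.result ... is not of type 'array'` mismatch: the tool configuration declares an array at `$.result`, so returning a string there fails validation. The response shapes are illustrative, taken only from the error message itself.

```python
# '$.result' is configured as an array, so a string there fails validation.
bad_response = {"result": ""}           # string where an array is expected
good_response = {"result": ["item1"]}   # matches the configured array type

def result_is_array(resp: dict) -> bool:
    """Check that the 'result' field has the declared array type."""
    return isinstance(resp.get("result"), list)
```

The fix is to make the tool's actual return value (or the node's declared output type) consistent, so the two agree.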
2.Are there any useful tools?
Answer: The platform's Plugin Marketplace has a wealth of tools to choose from as needed.
3.How to add, create, and host your own tool plugins?
Answer: The Xingchen Agent platform supports hosting custom tools. You can go to [Resource Management] --> [Create New Plugin] on the platform homepage. Note: Only existing custom tools can be hosted, and tools cannot be directly created on the platform. In addition, MCP plugins cannot be hosted in [Create New Plugin] (because MCP and ordinary plugins have different protocols). If you want to host MCP, please go to the MCP hosting platform.
4.How to develop an Agent that only inputs pictures without text?
Answer: The start node has a default AGENT_USER_INPUT parameter, which is required by default. A picture parameter can be set as required or optional depending on your scenario.
5.Does the platform support generating relevant file links for download, such as generating Word, PPT, EXCEL, etc., and then supporting download? Does the platform have tool plugins to parse Word, PPT, EXCEL, etc.? How to upload files to cloud storage?
Answer: Yes. The platform supports generating Word, PPT, and EXCEL files as output, and the start node supports defining inputs of the relevant file types for cloud storage. There are also tool plugins that parse and generate the corresponding files and provide download links; they can be selected in the tool node.
6.What to do if an error is reported when creating a new plugin?
Answer: Specific problems need case-by-case analysis. You can contact the administrator in the communication group and describe the symptoms to get an answer.
7.For text-to-speech, how to output voice?
Answer: The platform provides a text-to-speech tool, through which voice output can be performed by adding it in the tool node.
8.How to use the search tool?
Answer: The platform provides Aggregated Search and Web Search tools, which can be used by adding the plugin in the tool node.
9.How to sort out and process the results after aggregated search?
Answer: You can use the large model node and give prompts to process the results.
# 3. Common Issues of Prompt Agents
1.In the agent's prompt editing, what does "Assistant parameter is too long" mean?
Answer: The maximum prompt length is currently 2000 characters. You can shorten the prompt appropriately.
2.Can the prompt agent be published as an API for calling?
Answer: Yes. Enter Release Management, select the corresponding agent, click Release, and publish it as an API in the pop-up box.
# 4. Common Issues of Plugins and Knowledge Bases
# 4.1 Plugin-related Consultation
1.What to do if the IP address is in the blacklist during the custom plugin interface debugging?
Answer: Check if the IP is an intranet address. Currently, only public network addresses are supported.
2.What is the maximum number of pages that the OCR tool of the iFlytek Agent Platform can read? Why does the response time timeout?
Answer: When calling the tool alone, it is about 45 pages. The timeout is mainly related to too many pages and the complexity of the content on each page.
# 4.2 Knowledge Base-related Consultation
1.How long does it take to call the relevant content in the knowledge base after the knowledge base is successfully built?
Answer: It takes effect immediately.
2.How to collect relevant information in the agent for the knowledge base files uploaded in the personal space?
Answer: Click [My Agents] - [Create New Agent], and you can add a [Knowledge Base Node] in the workflow agent to select the knowledge base file you want to reference for collection and calling.
# 5. Common Issues of Release Management
# 5.1 Management Consultation
1.How to check the Agent ID or botid? Where is it? How to query the botid / Agent ID?
Answer: Click the left menu bar, select Release Management, and find the Agent ID, as shown in the figure below.
2.How to check the flowid? Where is the flowid?
Answer: The flowid is the identifier of the workflow agent, which is generally provided to the platform technical personnel for problem troubleshooting. Enter the workflow debugging canvas page, click the symbol next to the agent name in the upper left corner, and the flowid can be found in the pop-up floating box, as shown in the figure below:
3.If there is picture information in the Word, which is imported into the knowledge base, how to make the agent return pictures during knowledge Q&A?
Answer: Currently, there are two methods: 1. Use the Knowledge Base Node, add the Knowledge Base Node, and then connect a Large Model Node after it, and select [Prompt Library] -- [Official] -- [Image Retrieval Q&A] for the prompt. 2. Directly use the Knowledge Base Pro Node, no prompt for picture recall is needed, only fill in the restrictions as needed.
4.What is the maximum waiting time for output on the conversation page after the Agent is published to the platform?
Answer: The conversation page currently has a 2-minute timeout: if no frame is returned within 2 minutes, the gateway disconnects. It is therefore recommended to add a message node after any long-running node in the workflow and turn on streaming output to ensure frames keep returning, so the conversation page can output results continuously.
5.After the Agent is published, is the token consumed by the developer or the user?
Answer: Each conversation is the process of running the Agent itself, consuming the developer's token.
# 5.2 Release API Consultation
1.What to do if an error [Authorization Error, Concurrent Requests Exceed Authorized Limit] is reported during API calling?
Answer: First, check whether the appid used in the API call matches the appid bound when the agent was published; if not, make them consistent. If they match and the error persists, check in Order Management whether your free resources are sufficient; if not, apply for authorization to increase concurrency and upgrade the package as needed.
2.How to view the secret key information of the APPID? API Key
Answer: Binding an appid is required when calling the agent through the API, and each appid has its own secret key information, namely the API Secret and API Key.
3.Why is an error reported that the input parameter does not exist when using the API call after adding an input parameter in the start node?
Answer:
- Please check whether the flowid, apikey and apisecret passed in during the API call are correct.
- Please check whether the modified workflow is updated and published; if not published, the online protocol will still be the old one.
- Please check whether the input parameters in the request protocol are placed under the parameters field, in the format `{"AGENT_USER_INPUT": question, "xxxx": "yyyy"}`.
- If all the above are correct, please contact technical support personnel for assistance.
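The third checkpoint above can be sketched as follows: custom start-node inputs go under the `parameters` field alongside `AGENT_USER_INPUT`. The helper name and any keys other than `AGENT_USER_INPUT` are placeholders, not the platform's exact protocol.

```python
import json

def build_payload(question: str, extra: dict) -> str:
    """Assemble a request body with custom inputs under 'parameters'."""
    body = {
        "parameters": {
            "AGENT_USER_INPUT": question,
            **extra,  # custom start-node variables, e.g. {"xxxx": "yyyy"}
        }
    }
    return json.dumps(body, ensure_ascii=False)
```

If a custom parameter sits outside `parameters`, or the published version predates the parameter, the "input parameter does not exist" error is expected.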
4.Concept of token
Answer: The token is an important unit of model-service usage. When you call the model inference service, the input content is tokenized into tokens the model can understand; after processing, the model also outputs tokens, which are converted into the text or other content you need. The number of tokens processed by the model (input plus output) is the key unit for measuring inference-service usage.
For example, a common calculation method for the cost of language large model inference services is as follows:
Cost = Number of tokens used * Unit price of token
- Due to different tokenization strategies adopted by different models, the same text may be converted into different numbers of tokens.
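A worked example of the cost formula above (the unit price here is an invented figure for illustration only):

```python
def inference_cost(input_tokens: int, output_tokens: int,
                   price_per_token: float) -> float:
    """Cost = number of tokens used (input + output) * unit price per token."""
    return (input_tokens + output_tokens) * price_per_token

# e.g. 1200 input tokens + 300 output tokens at a hypothetical
# 0.00002 per token -> 1500 * 0.00002 = 0.03
cost = inference_cost(1200, 300, 0.00002)
```

Because tokenization strategies differ between models, the same text can yield different token counts and therefore different costs for the same unit price.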
5.How to view the authorization status of the model? Why is the model authorization amount of the appid unchanged after the package is upgraded or renewed? How to check if the model is authorized?
Answer: First confirm (or update) the binding between the model and the appid — an appid can be bound when the agent is published as an API. The authorization status of the models bound to that appid can then be viewed on the platform's Order Management page.
6.What to do if the OpenAPI call times out?
Answer: A call timeout usually means a large model or tool node takes too long to respond. Solve it by adding a message node after the slow node and turning on streaming output to ensure frames are returned in time.
# 6. Model Management
1.How to call your own model, how to call the MaaS model, how to select fine-tuned models, how to select more models?
Answer: The platform currently supports two ways to add models:
1. Models fine-tuned and trained on the MaaS platform can be called: select them in the model node. They must belong to the same mobile-phone account and have been successfully published on the MaaS platform, after which they automatically appear in the model node for selection.
2. [Model Management] on the platform homepage supports adding OpenAI-protocol models; once added successfully, the model can be selected in the model node. Note that MaaS-platform models that do not comply with the OpenAI model protocol cannot be created in Model Management.