Self-Hosted Skyvern Shows "Error Creating Workflow Run From Prompt"


===========================================================

Introduction

Self-hosted Skyvern is a powerful tool for building and deploying AI-powered workflows. However, when Skyvern runs in a Docker environment against an OpenAI-compatible LLM endpoint, you may hit an error when trying to create workflow runs from prompts. In this article, we will look at what causes this error and walk through resolving it step by step.

Understanding the Error

The error "Error creating workflow run from prompt" is often accompanied by a 500 status code and a message indicating that the LLM request failed unexpectedly. This error can be caused by a variety of factors, including misconfigured environment variables, incorrect LLM model settings, or issues with the LLM provider.

Analyzing the YML File

The provided YML file contains the environment variables used to configure the Skyvern deployment. The LLM_KEY variable is set to OPENAI_COMPATIBLE, but the Docker logs show that this deployment does not recognize that value as a registered LLM key: it falls back to a "general model configuration" and passes the raw string to LiteLLM as a model name, which LiteLLM cannot resolve. This is the cause of the error.

# - ./alembic:/app/alembic
environment:
  - DATABASE_STRING=postgresql+psycopg://skyvern:skyvern@postgres:5432/skyvern
  - BROWSER_TYPE=chromium-headful
  - ENABLE_OPENAI_COMPATIBLE=true
  - OPENAI_COMPATIBLE_API_BASE=http://xxxxxxx:4000/v1
  - OPENAI_COMPATIBLE_API_KEY=xxxxxx
  - OPENAI_COMPATIBLE_MODEL_NAME=openai/deepseek-ai/DeepSeek-V3
  - LLM_KEY=OPENAI_COMPATIBLE
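
One quick way to see why this value fails, independent of Skyvern, is to ask LiteLLM how it resolves a model string into a provider. The short sketch below is our own illustration (not Skyvern code) and assumes only that the litellm package is installed; it uses litellm.get_llm_provider, the same resolution step that appears in the traceback further down. A provider-prefixed string such as openai/deepseek-ai/DeepSeek-V3 resolves to the openai provider, while the bare label OPENAI_COMPATIBLE raises the BadRequestError seen in the logs.

import litellm

# A provider-prefixed model string resolves to a concrete provider.
model, provider, _, _ = litellm.get_llm_provider("openai/deepseek-ai/DeepSeek-V3")
print(provider)  # -> "openai"

# The bare configuration label has no provider prefix, so resolution fails
# with the same BadRequestError that shows up in the Docker logs.
try:
    litellm.get_llm_provider("OPENAI_COMPATIBLE")
except litellm.exceptions.BadRequestError as err:
    print(err)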

Examining the Docker Logs

The Docker logs provide the exact error message and stack trace. The failure surfaces as a litellm.exceptions.BadRequestError stating that the LLM provider was not provided: LiteLLM received the literal string OPENAI_COMPATIBLE as the model name and could not map it to any provider.

2025-04-30T10:55:28.481914Z [info     ] Using general model configuration for unknown LLM key llm_key=OPENAI_COMPATIBLE
2025-04-30T10:55:28.482038Z [info     ] Using general model configuration for unknown LLM key llm_key=OPENAI_COMPATIBLE
2025-04-30T10:55:28.517585Z [info     ] Using general model configuration for unknown LLM key llm_key=OPENAI_COMPATIBLE
2025-04-30T10:55:28.517747Z [info     ] Using general model configuration for unknown LLM key llm_key=OPENAI_COMPATIBLE

Provider List: https://docs.litellm.ai/docs/providers

2025-04-30T10:56:52.488497Z [error    ] LLM request failed unexpectedly llm_key=OPENAI_COMPATIBLE
Traceback (most recent call last):
  File "/app/skyvern/forge/sdk/api/llm/api_handler_factory.py", line 312, in llm_api_handler
    response = await litellm.acompletion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1452, in wrapper_async
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1313, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 446, in acompletion
    _, custom_llm_provider, _, _ = get_llm_provider(
                                   ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 358, in get_llm_provider
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 335, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=OPENAI_COMPATIBLE
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
2025-04-30T10:56:52.489766Z [error    ] LLM failure to initialize task v2
Traceback (most recent call last):
  File "/app/skyvern/forge/sdk/api/llm/api_handler_factory.py", line 312, in llm_api_handler
    response = await litellm.acompletion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1452, in wrapper_async
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1313, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 446, in acompletion
    _, custom_llm_provider, _, _ = get_llm_provider(
                                   ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 358, in get_llm_provider
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 335, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=OPENAI_COMPATIBLE
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/skyvern/forge/sdk/agent_protocol.py", line 1341, in run_task_v2
    task_v2 = await task_v2_service.initialize_task_v2(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/skyvern/services/task_v2_service.py", line 125, in initialize_task_v2
    metadata_response = await app.LLM_API_HANDLER(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/skyvern/forge/sdk/api/llm/api_handler_factory.py", line 338, in llm_api_handler
    raise LLMProviderError(llm_key) from e
skyvern.forge.sdk.api.llm.exceptions.LLMProviderError: Error while using LLMProvider OPENAI_COMPATIBLE
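
The key detail in the traceback is that the configured LLM_KEY string ends up as the model argument to litellm.acompletion, which is why the message says "You passed model=OPENAI_COMPATIBLE". For comparison, the sketch below shows the call shape LiteLLM expects when talking to an OpenAI-compatible endpoint: a provider-prefixed model name plus the endpoint's base URL and key. It is a standalone illustration that reuses the placeholder values from the compose file, not Skyvern's internal code.

import asyncio

import litellm

async def check_endpoint() -> None:
    # "openai/..." tells LiteLLM to speak the OpenAI chat API to the given base URL.
    response = await litellm.acompletion(
        model="openai/deepseek-ai/DeepSeek-V3",
        api_base="http://xxxxxxx:4000/v1",  # placeholder from the compose file
        api_key="xxxxxx",                   # placeholder from the compose file
        messages=[{"role": "user", "content": "ping"}],
    )
    print(response.choices[0].message.content)

asyncio.run(check_endpoint())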

Resolving the Error

To resolve the error, update the LLM_KEY variable in the YML file to a value your deployment can actually resolve. In practice that means either an LLM key your Skyvern version registers, or a provider-qualified model string in provider/model form that LiteLLM understands. The example below uses huggingface/starcoder only because that is the placeholder from LiteLLM's own error message; for a DeepSeek model served behind an OpenAI-compatible endpoint, substitute the provider and model you are actually running.

# - ./alembic:/app/alembic
environment:
  - DATABASE_STRING=postgresql+psycopg://skyvern:skyvern@postgres:5432/skyvern
  - BROWSER_TYPE=chromium-headful
  - ENABLE_OPENAI_COMPATIBLE=true
  - OPENAI_COMPATIBLE_API_BASE=http://xxxxxxx:4000/v1
  - OPENAI_COMPATIBLE_API_KEY=xxxxxx
  - OPENAI_COMPATIBLE_MODEL_NAME=openai/deepseek-ai/DeepSeek-V3
  - LLM_KEY=huggingface/starcoder

Additionally, make sure OPENAI_COMPATIBLE_API_BASE and OPENAI_COMPATIBLE_API_KEY match the endpoint you are actually running, and that OPENAI_COMPATIBLE_MODEL_NAME is a model that endpoint serves.
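
If you are not sure which model names the endpoint serves, you can query the standard /models route of the OpenAI-compatible API before changing the Skyvern configuration. The sketch below is a rough check using the requests library and the placeholder values from the compose file; most OpenAI-compatible servers (including LiteLLM proxies) expose this route, but yours may differ.

import requests

API_BASE = "http://xxxxxxx:4000/v1"  # placeholder from the compose file
API_KEY = "xxxxxx"                   # placeholder from the compose file

# List the model names the endpoint serves; one of these should be used as
# OPENAI_COMPATIBLE_MODEL_NAME.
resp = requests.get(
    f"{API_BASE}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])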

Conclusion

In this article, we explored the causes of the "Error creating workflow run from prompt" error in self-hosted Skyvern and walked through resolving it. By setting LLM_KEY to a value your deployment can resolve and verifying that OPENAI_COMPATIBLE_API_BASE, OPENAI_COMPATIBLE_API_KEY, and OPENAI_COMPATIBLE_MODEL_NAME match the endpoint you are running, you should be able to create workflow runs from prompts again.

===========================================================

Introduction

In our previous article, we explored the causes of the "Error creating workflow run from prompt" error in self-hosted Skyvern and provided a step-by-step guide to resolving it. Some users may still have questions, so this follow-up answers the most frequently asked ones.

Q: What is the cause of the "Error creating workflow run from prompt" error in self-hosted Skyvern?

A: The "Error creating workflow run from prompt" error in self-hosted Skyvern is caused by a variety of factors, including misconfigured environment variables, incorrect LLM model settings, or issues with the LLM provider.

Q: How do I update the LLM_KEY variable in the YML file?

A: Replace the current value with the LLM key you want to use, in a form LiteLLM can resolve, i.e. provider/model. For example, to use the StarCoder model through Hugging Face inference endpoints, you would set LLM_KEY to huggingface/starcoder. Restart the containers afterwards so the new environment variables take effect.

Q: What are the correct values for the OPENAI_COMPATIBLE_API_BASE and OPENAI_COMPATIBLE_API_KEY variables?

A: The correct values for the OPENAI_COMPATIBLE_API_BASE and OPENAI_COMPATIBLE_API_KEY variables depend on the actual API base and key for the LLM provider you are using. You can find this information in the documentation for the LLM provider you are using.

Q: How do I troubleshoot the "Error creating workflow run from prompt" error in self-hosted Skyvern?

A: Check the Docker logs for the underlying exception; in this case they show a litellm BadRequestError complaining that no provider was supplied. Then correct the LLM_KEY value and verify that OPENAI_COMPATIBLE_API_BASE and OPENAI_COMPATIBLE_API_KEY match the endpoint you are running.

Q: Can I use a different LLM model in self-hosted Skyvern?

A: Yes, you can use a different LLM model in self-hosted Skyvern. To do this, you need to update the LLM_KEY variable in the YML file to the actual LLM key for the model you want to use.

Q: How do I update the LLM model settings in self-hosted Skyvern?

A: Update the OPENAI_COMPATIBLE_MODEL_NAME variable in the YML file to the name of a model your endpoint actually serves; in the example configuration above this is openai/deepseek-ai/DeepSeek-V3.

Q: Can I use a custom LLM model in self-hosted Skyvern?

A: Yes, you can use a custom LLM model in self-hosted Skyvern. To do this, you need to update the LLM_KEY variable in the YML file to the actual LLM key for the custom model you want to use.

Q: How do I troubleshoot issues with the LLM provider in self-hosted Skyvern?

A: To troubleshoot issues with the LLM provider in self-hosted Skyvern, you can check the Docker logs for any error messages. You can also try updating the OPENAI_COMPATIBLE_API_BASE and OPENAI_COMPATIBLE_API_KEY variables to match the actual API base and key for the LLM provider you are using.

Conclusion

In this article, we provided a Q&A section to address some of the most frequently asked questions about the "Error creating workflow run from prompt" error in self-hosted Skyvern. We hope that this article has been helpful in resolving any issues you may have had with this error. If you have any further questions or concerns, please don't hesitate to contact us.