Self-Hosting LLM with Ollama
Currently, self-hosting LLMs is a preview feature available on the Pro plan. To use it, upgrade to the Pro plan from the dashboard and join the waitlist.
Morph offers a feature for self-hosting LLMs served by Ollama. Select the model you want to use on the dashboard and deploy it; a URL that can be called as an API will be issued, allowing you to call the model from Python code.
Using LLM
Deploying LLM
Click the LLM tab on the dashboard to navigate to the LLM creation screen.
Enter a desired LLM Name and select the Model Name you want to use.
The following models are currently available:
| Model | Parameters |
| --- | --- |
| deepseek-r1 | 1.5b, 7b, 8b, 14b |
| llama3.2 | 8b |
| phi4 | 14b |
| qwen | 0.5b, 1.8b, 4b, 7b, 14b |
Confirming the Created Model
Select the created LLM from the LLM tab.
If the status of the selected model in the Logs section is Deployment Succeeded, the creation was successful. If it is still in progress, please wait until the deployment completes.
The App URL is the URL of the LLM hosted on Morph. You can send requests to the LLM using this URL together with your Morph API Key.
Sending Requests to LLM
Use the App URL and Morph API Key to send requests to the LLM.
Below are sample requests using Python and cURL. To use the Python sample, you need to install the langchain-ollama package.
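As an illustration of the underlying HTTP call (the official Python sample uses langchain-ollama), a minimal standard-library sketch might look like the following. It targets Ollama's /api/generate endpoint; the App URL placeholder, the Bearer auth header for the Morph API Key, and the model name are assumptions to be replaced with your deployment's actual values.

```python
import json
import urllib.request

# Assumed placeholders: substitute the App URL and Morph API Key
# shown on your deployment's detail page in the dashboard.
APP_URL = "https://your-app-url.example.com"
API_KEY = "YOUR_MORPH_API_KEY"


def build_request(app_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{app_url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Assumed auth scheme: Morph API Key as a Bearer token.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


def generate(app_url: str, api_key: str, model: str, prompt: str) -> str:
    """Send the prompt to the hosted model and return its response text."""
    with urllib.request.urlopen(build_request(app_url, api_key, model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a deployed model):
# print(generate(APP_URL, API_KEY, "phi4", "Hello!"))
```

The langchain-ollama sample achieves the same thing at a higher level; this sketch only shows the shape of the raw request.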