Hugging Face
Jan supports Hugging Face models through two methods: the new HF Router (recommended) and Inference Endpoints. Both methods require a Hugging Face token and billing to be set up.
Option 1: HF Router (Recommended)
The HF Router provides access to models from multiple providers (Replicate, Together AI, SambaNova, Fireworks, Cohere, and more) through a single endpoint.
Step 1: Get Your HF Token
Visit Hugging Face Settings > Access Tokens and create a token. Make sure you have billing set up on your account.
Step 2: Configure Jan
- Go to Settings > Model Providers > HuggingFace
- Enter your HF token
- Use this URL:
https://router.huggingface.co/v1
You can find out more about the HF Router here.
Step 3: Start Using Models
Jan comes with three HF Router models pre-configured. Select one and start chatting immediately.
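The router speaks an OpenAI-compatible chat-completions API, so you can also verify your token and the base URL outside Jan. Below is a minimal sketch using only the Python standard library; the model ID is an assumption for illustration, so substitute one of the models listed in Jan's provider settings:

```python
import json
import os
import urllib.request

# The HF Router's OpenAI-compatible chat-completions route.
ROUTER_URL = "https://router.huggingface.co/v1/chat/completions"

def build_chat_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Build a chat-completion request in the OpenAI-compatible format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        ROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Only send the request when a token is actually configured.
# The model ID below is an assumption; pick one from Jan's provider settings.
if os.environ.get("HF_TOKEN"):
    req = build_chat_request(
        "meta-llama/Llama-3.1-8B-Instruct", "Hello!", os.environ["HF_TOKEN"]
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If this call succeeds from the command line but Jan still fails, the problem is in Jan's provider configuration rather than your token or billing.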
Option 2: HF Inference Endpoints
For more control over specific models and deployment configurations, you can use Hugging Face Inference Endpoints.
Step 1: Navigate to the HuggingFace Model Hub
Visit the Hugging Face Model Hub (make sure you are logged in) and pick the model you want to use.
Step 2: Configure HF Inference Endpoint and Deploy
After selecting your model, click the Deploy button and choose a deployment method. For this guide, we will use HF Inference Endpoints.
This takes you to the deployment setup page. For this example, leave the default settings under the GPU tab and click Create Endpoint.
Once your endpoint is ready, test that it works on the Test your endpoint tab.
If you get a response, you can click on Copy to copy the endpoint URL and API key.
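If you prefer to check the endpoint from a script instead of the Test your endpoint tab, the sketch below builds the same kind of request against the URL you just copied. The placeholder URL and environment variable names are assumptions, as is the "tgi" model name (a placeholder many dedicated endpoints accept; check your endpoint's details if it is rejected):

```python
import json
import os
import urllib.request

def build_test_request(endpoint_url: str, token: str) -> urllib.request.Request:
    """Build a one-message chat request against the endpoint's
    OpenAI-compatible chat-completions route."""
    payload = {
        # "tgi" is a placeholder model name accepted by many dedicated
        # endpoints; this is an assumption, not guaranteed for every endpoint.
        "model": "tgi",
        "messages": [{"role": "user", "content": "Hello!"}],
    }
    return urllib.request.Request(
        endpoint_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# HF_ENDPOINT_URL is a hypothetical environment variable used here for
# illustration; paste the endpoint URL you copied from the endpoint page.
if os.environ.get("HF_ENDPOINT_URL") and os.environ.get("HF_TOKEN"):
    req = build_test_request(os.environ["HF_ENDPOINT_URL"], os.environ["HF_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```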
Step 3: Configure Jan
If you do not have an API key, you can create one under Settings > Access Tokens here. Once you have the token, copy it and add it to Jan, along with your endpoint URL, at Settings > Model Providers > HuggingFace.
3.1 HF Token
3.2 HF Endpoint URL
3.3 Jan Settings
3.4 Add Model Details
Step 4: Start Using the Model
Now you can start using the model in any chat.
If you want to learn how to use Jan Nano with MCP, check out the guide here.
Available Hugging Face Models
Option 1 (HF Router): Access to models from multiple providers as shown in the providers image above.
Option 2 (Inference Endpoints): You can follow the steps above with a wide range of models on Hugging Face and bring them into Jan. Browse other models in the Hugging Face Model Hub.
Troubleshooting
Common issues and solutions:
1. Started a chat but the model is not responding
- Verify your API_KEY/HF_TOKEN is correct and not expired
- Ensure you have billing set up on your HF account
- For Inference Endpoints: Ensure the endpoint is still running. Endpoints go idle after a period of inactivity so you are not charged while not using them, and may need to be resumed before they respond.
2. Connection Problems
- Check your internet connection
- Verify Hugging Face's system status
- Look for error messages in Jan's logs
3. Model Unavailable
- Confirm your API key has access to the model
- Check if you're using the correct model ID
- Verify your Hugging Face account has the necessary permissions
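For both the "not responding" and "model unavailable" cases, the quickest first check is whether the token itself is valid. The sketch below uses Hugging Face's whoami-v2 API route, which returns your account info for a valid token and an HTTP 401 error for an expired or incorrect one:

```python
import json
import os
import urllib.request

def build_whoami_request(token: str) -> urllib.request.Request:
    """Build a GET request to Hugging Face's whoami-v2 route,
    which identifies the account behind a token."""
    return urllib.request.Request(
        "https://huggingface.co/api/whoami-v2",
        headers={"Authorization": f"Bearer {token}"},
    )

# Only hit the network when a token is configured. A 401 HTTPError here
# means the token is invalid or expired; fix that before debugging Jan.
if os.environ.get("HF_TOKEN"):
    with urllib.request.urlopen(build_whoami_request(os.environ["HF_TOKEN"])) as resp:
        info = json.loads(resp.read())
        print("Token OK, account:", info.get("name"))
```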
Need more help? Join our Discord community or check Hugging Face's documentation.