Run the image classification model on an uploaded image and return the top predicted labels with their probabilities.
This endpoint accepts a multipart/form-data POST with a file field named `image`.
| Parameter | Type |
|---|---|
| `image` *required* | string <binary> (Image) |
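A minimal client call for this endpoint, sketched with the Python standard library. The base URL `http://localhost:8000` and the path `/predict` are assumptions; take the real values from the OpenAPI specification.

```python
import json
import urllib.request
import uuid


def build_multipart(field, filename, data, content_type="application/octet-stream"):
    """Build a multipart/form-data body containing a single file field."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"


def classify(path, base_url="http://localhost:8000"):
    # Base URL and endpoint path are assumptions, not taken from this page;
    # substitute the actual values from the OpenAPI spec.
    with open(path, "rb") as f:
        body, ctype = build_multipart("image", path, f.read(), "image/jpeg")
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=body,
        headers={"Content-Type": ctype},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The returned JSON can then be sorted by probability to recover the top labels, e.g. `sorted(result["predictions"].items(), key=lambda kv: kv[1], reverse=True)`.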
{- "filename": "guacamole.jpeg",
- "predictions": {
- "ceviche": 0.001,
- "chicken_curry": 0,
- "guacamole": 0.997,
- "nachos": 0,
- "tuna_tartare": 0
}
}Creates an attention-based visualization showing which parts of the image the model focuses on for prediction.
| Parameter | Type |
|---|---|
| `image` *required* | string <binary> (Image) |
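The `attention_map` field in the example response appears to be a base64-encoded PNG (the `iVBOR...` prefix is the base64 form of the PNG signature). Assuming that, decoding and saving it might look like:

```python
import base64


def save_attention_map(payload, out_path="attention.png"):
    """Decode the base64 `attention_map` field and write it to disk.

    Assumes the field holds a base64-encoded PNG, which the `iVBOR...`
    prefix in the example response suggests.
    """
    png_bytes = base64.b64decode(payload["attention_map"])
    with open(out_path, "wb") as f:
        f.write(png_bytes)
    return png_bytes
```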
{- "attention_map": "iVBORw0KGgoAAAANS...",
- "confidence": 0.9968,
- "filename": "food.jpg",
- "grid_size": "13x15",
- "num_heads": 12,
- "num_layers": 12,
- "predicted_class": "guacamole"
}Generate text with the configured Ollama model.
POST a JSON body with `prompt` and, optionally, `temperature` and `max_tokens`. Returns the generated text and the model used.
| Parameter | Type | Constraints |
|---|---|---|
| `prompt` *required* | string (Prompt) | |
| `temperature` | number (Temperature) | 0 to 2, default 0.7 |
| `max_tokens` | integer (Max Tokens) | 1 to 4096, default 500 |
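A client-side sketch of this call, including the parameter bounds from the table above. The base URL `http://localhost:8000` and the path `/generate` are assumptions; check the OpenAPI spec for the actual values.

```python
import json
import urllib.request


def generate(prompt, temperature=0.7, max_tokens=500,
             base_url="http://localhost:8000"):
    # Base URL and endpoint path are assumptions; substitute the real
    # ones from the OpenAPI specification.
    # Validate against the documented ranges before sending the request.
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if not 1 <= max_tokens <= 4096:
        raise ValueError("max_tokens must be between 1 and 4096")
    payload = json.dumps({
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```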
{- "max_tokens": 200,
- "prompt": "Write a haiku about masala dosa.",
- "temperature": 0.7
}{- "done": true,
- "model": "gemma3:270m",
- "response": "Golden crisp dosa,\nSpiced potato dreams within,\nCurry-scented dawn."
}Check whether the local Ollama service is reachable and return a list of available models plus the configured model name.
Returns a JSON object with `status`, `ollama_url`, `available_models` and `configured_model`.
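A typical client use of this endpoint is to confirm that the configured model is actually pulled before sending generation requests. A sketch, assuming the service at `http://localhost:8000` and a hypothetical path `/ollama/health` (take the real path from the OpenAPI spec):

```python
import json
import urllib.request


def check_model_available(health, model=None):
    """Given the health-endpoint payload, report whether the configured
    (or an explicitly given) model appears in `available_models`."""
    wanted = model or health["configured_model"]
    return health["status"] == "healthy" and wanted in health["available_models"]


def ollama_health(base_url="http://localhost:8000"):
    # Base URL and endpoint path are assumptions; check the OpenAPI spec.
    with urllib.request.urlopen(f"{base_url}/ollama/health") as resp:
        return json.load(resp)
```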
{- "available_models": [
- "gemma3:270m",
- "llama3.1:8b"
], - "configured_model": "gemma3:270m",
- "status": "healthy"
}