TikkaMasalAI Backend (0.1.0)

Metrics

Endpoint that serves Prometheus metrics.

Responses

Response samples

Content type
application/json
null

Metrics Dashboard

Enhanced visual dashboard for Prometheus metrics with charts. Includes Chart.js for visualizations.

Responses

Read Root

Responses

Response samples

Content type
application/json
{
  "status": "ok"
}

Health Check

Lightweight health endpoint for container orchestration.

Returns 200 OK when the app is up and routers are registered.

Responses

Response samples

Content type
application/json
{
  "status": "ok"
}
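
Because this endpoint exists for container orchestration, it can back a container health probe directly. A minimal sketch of a Dockerfile `HEALTHCHECK`; the port 8000 and the `/health` route path are assumptions, since the spec does not state where the app is mounted:

```dockerfile
# Assumed: app listens on port 8000 and the route is /health.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD curl -fsS http://localhost:8000/health || exit 1
```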

prediction

Predict Food Class

Run the image classification model on an uploaded image and return the top predicted labels with their probabilities.

This endpoint accepts a multipart/form-data POST with a file field named 'image'.

Request Body schema: multipart/form-data
required
image
required
string <binary> (Image)

Responses

Response samples

Content type
application/json
{
  "filename": "guacamole.jpeg",
  "predictions": { }
}
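
A client must send the image as a multipart/form-data part named `image`. A minimal stdlib sketch of assembling and posting such a request; the `/prediction` URL in the commented call is an assumption, not taken from this spec:

```python
import json
import urllib.request
import uuid


def build_multipart(field: str, filename: str, data: bytes,
                    content_type: str = "image/jpeg") -> tuple[bytes, str]:
    """Assemble a single-file multipart/form-data body and its Content-Type."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + data + tail, f"multipart/form-data; boundary={boundary}"


def predict(url: str, image_path: str) -> dict:
    """POST an image file to the prediction endpoint and return its JSON."""
    with open(image_path, "rb") as f:
        body, ctype = build_multipart("image", image_path, f.read())
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": ctype}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# predict("http://localhost:8000/prediction", "guacamole.jpeg")  # URL assumed
```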

Generate Attention Heatmap

Creates an attention-based visualization showing which parts of the image the model focuses on for prediction.

Request Body schema: multipart/form-data
required
image
required
string <binary> (Image)

Responses

Response samples

Content type
application/json
{
  "attention_map": "iVBORw0KGgoAAAANS...",
  "confidence": 0.9968,
  "filename": "food.jpg",
  "grid_size": "13x15",
  "num_heads": 12,
  "num_layers": 12,
  "predicted_class": "guacamole"
}
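
The `attention_map` field carries a base64-encoded image. A minimal sketch of decoding it back to raw bytes on the client side; the stand-in payload below is illustrative, and the PNG signature check assumes the server encodes the heatmap as PNG, which this spec does not state:

```python
import base64


def decode_attention_map(response: dict) -> bytes:
    """Decode the base64 `attention_map` field into raw image bytes."""
    return base64.b64decode(response["attention_map"])


# Stand-in payload; a real response carries the full encoded image.
sample = {"attention_map": base64.b64encode(b"\x89PNG\r\n\x1a\n...").decode()}
image_bytes = decode_attention_map(sample)
# If the server emits PNG (an assumption), the PNG signature leads the bytes:
assert image_bytes.startswith(b"\x89PNG\r\n\x1a\n")
```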

llm

Generate text from prompt

Generate text with the configured Ollama model.

POST a JSON body with a required prompt and optional temperature and max_tokens. Returns the generated text and the model used.

Request Body schema: application/json
required
prompt
required
string (Prompt)
temperature
number (Temperature) [ 0 .. 2 ]
Default: 0.7
max_tokens
integer (Max Tokens) [ 1 .. 4096 ]
Default: 500

Responses

Request samples

Content type
application/json
{
  "max_tokens": 200,
  "prompt": "Write a haiku about masala dosa.",
  "temperature": 0.7
}

Response samples

Content type
application/json
{
  "done": true,
  "model": "gemma3:270m",
  "response": "Golden crisp dosa,\nSpiced potato dreams within,\nCurry-scented dawn."
}
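
The schema's bounds and defaults can be enforced client-side before posting. A minimal sketch that mirrors the schema above (temperature in [0, 2] defaulting to 0.7, max_tokens in [1, 4096] defaulting to 500); the `/llm/generate` path in the comment is an assumption:

```python
def build_generate_payload(prompt: str,
                           temperature: float = 0.7,
                           max_tokens: int = 500) -> dict:
    """Build a request body matching the documented schema and bounds."""
    if not prompt:
        raise ValueError("prompt is required")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be in [0, 2]")
    if not 1 <= max_tokens <= 4096:
        raise ValueError("max_tokens must be in [1, 4096]")
    return {"prompt": prompt, "temperature": temperature, "max_tokens": max_tokens}


payload = build_generate_payload("Write a haiku about masala dosa.", max_tokens=200)
# POST `payload` as JSON to the generation route (e.g. /llm/generate -- path assumed).
```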

Ollama service health

Check whether the local Ollama service is reachable and return a list of available models plus the configured model name.

Returns a JSON object with status, ollama_url, available_models and configured_model.

Responses

Response samples

Content type
application/json
{
  "available_models": [ ],
  "configured_model": "gemma3:270m",
  "ollama_url": "http://localhost:11434",
  "status": "healthy"
}
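
A client can use this response to confirm the configured model has actually been pulled into Ollama. A minimal sketch against a sample payload; the field names come from the response above, while the model list contents are illustrative (the spec's sample shows an empty list):

```python
def model_is_available(health: dict) -> bool:
    """True when the service is healthy and the configured model is listed."""
    return (health.get("status") == "healthy"
            and health.get("configured_model") in health.get("available_models", []))


sample = {
    "status": "healthy",
    "ollama_url": "http://localhost:11434",
    "configured_model": "gemma3:270m",
    "available_models": ["gemma3:270m"],  # illustrative contents
}
assert model_is_available(sample)
```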