Images

Overview

Netmind provides compatibility with the OpenAI API standard, allowing for easier integration into existing applications.

Base URL

https://api.netmind.ai/inference-api/openai/v1

API Key

To use the API, you need to obtain a Netmind AI API Key. For detailed instructions, please refer to the authentication documentation.
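
As a minimal sketch of how the base URL and API key fit together, the snippet below points the OpenAI Python SDK at the Netmind endpoint and reads the key from an environment variable (the variable name NETMIND_API_KEY is only an illustrative choice, not a requirement):

import os

from openai import OpenAI

# The base URL and API key are the only Netmind-specific settings;
# everything else follows standard OpenAI SDK usage.
client = OpenAI(
    base_url="https://api.netmind.ai/inference-api/openai/v1",
    api_key=os.environ["NETMIND_API_KEY"],  # assumed environment variable name
)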

Supported Models

Image Generation

  • runwayml/stable-diffusion-v1-5

  • black-forest-labs/FLUX.1-schnell

  • stabilityai/stable-diffusion-3.5-large

Image Edit

  • black-forest-labs/FLUX.1-Depth-dev

  • black-forest-labs/FLUX.1-Canny-dev

  • black-forest-labs/FLUX.1-Fill-dev

Image Variation

  • black-forest-labs/FLUX.1-Redux-dev

Usage Examples

The Image API is compatible with the OpenAI Python SDK. Below are examples of how to use it.

Image Generation

Python Client

from openai import OpenAI
import base64

# Point the OpenAI SDK at the Netmind endpoint.
client = OpenAI(
    base_url="https://api.netmind.ai/inference-api/openai/v1",
    api_key="<YOUR API Key>",
)

# Generate an image and request the result as base64-encoded JSON.
response = client.images.generate(
    model="black-forest-labs/FLUX.1-schnell",
    prompt="Generate a cup of coffee",
    response_format="b64_json",
)

# Decode the base64 payload and save it as a PNG file.
image_base64 = response.data[0].b64_json
with open("generated_image.png", "wb") as f:
    f.write(base64.b64decode(image_base64))

cURL Example

# Set your API key
export API_KEY="<YOUR API Key>"
curl -X POST "https://api.netmind.ai/inference-api/openai/v1/images/generations" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{
    "model": "stabilityai/stable-diffusion-3.5-large",
    "prompt": "Generate a cup of coffee",
    "response_format": "b64_json"
  }' -o response.json
cat response.json | jq -r ".data[0].b64_json" | base64 --decode > generated_image.png

Image Edit

The black-forest-labs/FLUX.1-Depth-dev and black-forest-labs/FLUX.1-Canny-dev models do not require a mask file for image editing, while black-forest-labs/FLUX.1-Fill-dev requires one (a mask-free sketch follows the Python example below).

Python Client

from openai import OpenAI
import base64

client = OpenAI(
    base_url="https://api.netmind.ai/inference-api/openai/v1",
    api_key="<YOUR API Key>",
)

# FLUX.1-Fill-dev requires both a source image and a mask file.
response = client.images.edit(
    model="black-forest-labs/FLUX.1-Fill-dev",
    image=open("image.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="Turn hair blue",
    response_format="b64_json",
)

image_base64 = response.data[0].b64_json
with open("generated_image.png", "wb") as f:
    f.write(base64.b64decode(image_base64))
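
For black-forest-labs/FLUX.1-Depth-dev and black-forest-labs/FLUX.1-Canny-dev the call is the same except that no mask is passed. A minimal sketch, reusing the same placeholder image and prompt:

from openai import OpenAI
import base64

client = OpenAI(
    base_url="https://api.netmind.ai/inference-api/openai/v1",
    api_key="<YOUR API Key>",
)

# Mask-free edit: for the Depth and Canny models the input image alone guides the edit.
response = client.images.edit(
    model="black-forest-labs/FLUX.1-Canny-dev",
    image=open("image.png", "rb"),
    prompt="Turn hair blue",
    response_format="b64_json",
)

with open("generated_image.png", "wb") as f:
    f.write(base64.b64decode(response.data[0].b64_json))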

cURL Example

# Set your API key
export API_KEY="<YOUR API Key>"
curl -X POST "https://api.netmind.ai/inference-api/openai/v1/images/edits" \
  -H "Authorization: Bearer ${API_KEY}" \
  --form 'model="black-forest-labs/FLUX.1-Fill-dev"' \
  --form 'prompt="Turn hair blue"' \
  --form 'response_format="b64_json"' \
  --form 'image=@"image.png"' \
  --form 'mask=@"mask.png"' -o response.json
cat response.json | jq -r ".data[0].b64_json" | base64 --decode > generated_image.png

Image Variation

Python Client

from openai import OpenAI
import base64

client = OpenAI(
    base_url="https://api.netmind.ai/inference-api/openai/v1",
    api_key="<YOUR API Key>",
)

response = client.images.create_variation(
    model="black-forest-labs/FLUX.1-Redux-dev",
    image=open("image.png", "rb"),
    response_format="b64_json",
)

image_base64 = response.data[0].b64_json
with open("generated_image.png", "wb") as f:
    f.write(base64.b64decode(image_base64))

cURL Example

# Set your API key
export API_KEY="<YOUR API Key>"
curl -X POST "https://api.netmind.ai/inference-api/openai/v1/images/variations" \
  -H "Authorization: Bearer ${API_KEY}" \
  --form 'model="black-forest-labs/FLUX.1-Redux-dev"' \
  --form 'response_format="b64_json"' \
  --form 'image=@"image.png"' -o response.json
cat response.json | jq -r ".data[0].b64_json" | base64 --decode > generated_image.png

Model Parameters

Image Generation

  • model: ID of the model to use. (required)

  • prompt: A text description of the desired image(s). (required)

  • size: The size of the generated image, in width x height format. (optional)

  • response_format: The format in which the generated images are returned: b64_json or url. If url is selected, note that the URL expires 24 hours after generation, so download the file before then; see the sketch after this list. (optional)
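
The sketch below illustrates the size and response_format options together. The 1024x1024 value is only an example; the documentation above does not list which sizes each model accepts, and the download uses the third-party requests library:

import requests
from openai import OpenAI

client = OpenAI(
    base_url="https://api.netmind.ai/inference-api/openai/v1",
    api_key="<YOUR API Key>",
)

# Request a URL instead of base64 data; the URL expires 24 hours after generation.
response = client.images.generate(
    model="black-forest-labs/FLUX.1-schnell",
    prompt="Generate a cup of coffee",
    size="1024x1024",  # width x height; example value only
    response_format="url",
)

# Download the file before the URL expires.
image_url = response.data[0].url
with open("generated_image.png", "wb") as f:
    f.write(requests.get(image_url).content)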

Image Edit

  • model: ID of the model to use. (required)

  • prompt: The prompt describing the desired edit. (required)

  • response_format: The format in which the generated images are returned: b64_json or url. If url is selected, note that the URL expires 24 hours after generation, so download the file before then. (optional)

  • image: The image to edit. (required)

  • mask: The mask to apply during the edit. (optional)

Image Variation

  • model: ID of the model to use. (required)

  • image: The source image for the variation. (required)

  • response_format: The format in which the generated images are returned: b64_json or url. If url is selected, note that the URL expires 24 hours after generation, so download the file before then. (optional)
