List Available Images

Next, you’ll choose an image template that provides the required environment for your GPU instance, such as "PyTorch" or "TensorFlow". Selecting an appropriate image template ensures the instance is configured with the software and dependencies needed for your workload.

If none of the platform-provided image templates meets your requirements, you can create a custom image template. A custom template also lets you:

  • Map a service on a specific IP to the internet.

  • Run VSCode or Jupyter in a web-based interface.

For details on how to create and manage custom image templates, see "API Reference" → "Rent GPU" → "POST /v1/rentgpu/images".
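
If you do define a custom template through that endpoint, the request might look like the following sketch. It is only a sketch: the field names are assumptions inferred from the image objects returned by the list endpoint (see the example response below), and values such as the startup command are illustrative, so treat the API Reference as the authoritative request schema.

import requests

url = "https://api.netmind.ai/v1/rentgpu/images"
headers = {"Authorization": "Bearer {{API_TOKEN}}"}  # replace with your actual token

# Hypothetical custom template definition. Field names are assumed from the
# image objects returned by GET /v1/rentgpu/images; the real request schema
# is documented in the API Reference.
custom_template = {
    "name": "my-custom-image",
    "description": "pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime",
    "image_name": "pytorch/pytorch",
    "image_tag": "2.3.0-cuda12.1-cudnn8-runtime",
    "docker_registry": None,      # set these three for a private registry
    "docker_username": None,
    "docker_passwd": None,
    "public_port": 8888,          # e.g. expose a Jupyter server to the internet
    "onstart": "jupyter lab --ip=0.0.0.0 --port=8888",  # illustrative startup command
}

response = requests.post(url, headers=headers, json=custom_template)
print(response.status_code, response.text)

The list endpoint itself is shown next.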

Example Request:

Replace {{API_TOKEN}} with your actual token.

cURL

curl --location 'https://api.netmind.ai/v1/rentgpu/images?limit=10&marker=12' \
--header 'Authorization: Bearer {{API_TOKEN}}'

Python

import requests

url = "https://api.netmind.ai/v1/rentgpu/images"

# limit: page size; marker: pagination cursor (see the paging sketch at the end of this page)
params = {"limit": 10, "marker": 12}
headers = {"Authorization": "Bearer {{API_TOKEN}}"}

response = requests.get(url, headers=headers, params=params)

print(response.text)

Example Response:

[
    {
        "name": "default",
        "description": "pytoch/pytorch:latest",
        "image_name": "pytoch/pytorch",
        "image_tag": "latest",
        "image_label": "default",
        "docker_username": null,
        "docker_passwd": null,
        "docker_registry": null,
        "public_port": null,
        "onstart": null,
        "options": null,
        "image_type": 0,
        "id": 0
    },
    ...
]
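
The limit and marker query parameters in the example request suggest marker-based pagination. The sketch below pages through every image template; it assumes that marker takes the id of the last item from the previous page, which is an assumption rather than documented behavior, so verify the exact semantics in the API Reference.

import requests

BASE_URL = "https://api.netmind.ai/v1/rentgpu/images"
HEADERS = {"Authorization": "Bearer {{API_TOKEN}}"}  # replace with your actual token

def list_all_images(limit=10):
    """Collect every image template page by page (pagination semantics assumed)."""
    images, marker = [], None
    while True:
        params = {"limit": limit}
        if marker is not None:
            params["marker"] = marker
        page = requests.get(BASE_URL, headers=HEADERS, params=params).json()
        images.extend(page)
        if len(page) < limit:        # short page -> no further results
            break
        marker = page[-1]["id"]      # assumed: marker is the id of the last item seen
    return images

for image in list_all_images():
    print(image["id"], image["image_label"], image["image_name"] + ":" + image["image_tag"])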
