NetMind Power Documentation

Model APIs


Last updated 2 months ago


NetMind supports online inference for many popular models, as well as API calls for easy integration.

  1. Click the "Model Library" tab on the console page.

  2. On the "Model Library" page, view and select an inference model. Clicking a model takes you to its details page.

  3. On the model details page, the API tab lists the model's pricing, usage methods, and parameters. The Playground tab also provides a quick, free way to test the model.
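The usage methods shown in the API tab can be exercised programmatically. Below is a minimal sketch using only the Python standard library; the base URL, endpoint path, and model name are assumptions for illustration — always copy the exact values from the model's API tab on the details page.

```python
import json
import os
import urllib.request

# Assumed base URL for illustration; verify it in the model's API tab.
API_BASE = "https://api.netmind.ai/inference/api/v1"


def build_chat_request(model: str, messages: list, token: str) -> urllib.request.Request:
    """Assemble an HTTP request for a chat completion call."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",  # assumed endpoint path
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__":
    req = build_chat_request(
        "example-org/example-model",  # placeholder: use the model id from the API tab
        [{"role": "user", "content": "Hello!"}],
        os.environ.get("NETMIND_API_TOKEN", ""),
    )
    # Uncomment to send the request once a valid API token is set:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```

Keeping the token in an environment variable rather than in source code avoids accidentally committing credentials.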

If the model you need isn't available, you can email us (support@netmind.ai) with the model details. We will evaluate the demand for it to decide whether to publish it on the platform.

(Screenshots: Model Library; Model Detail - API; Model Detail - Playground)