
Discord.Rocks API Documentation

Want a free proxy? https://scraper.api.airforce

Total requests: 11276885

Total models: 107

Playground: https://llmplayground.net

Our maps: https://api.airforce/maps

Endpoints

The API exposes the following endpoints, each documented below: Chat Completions, Models, Imagine, Imagine2, GlobalAsk (Ask), Vision, Suno, and Get Audio.

OpenAI Compatible

openai.base_url = 'https://api.airforce'
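
For example, a minimal sketch using the official OpenAI Python SDK (version 1.x is assumed; the model name and API key are placeholders):

from openai import OpenAI

# Point the OpenAI SDK at the API; see "API Key Usage" below for keys
client = OpenAI(base_url='https://api.airforce', api_key='YOUR_API_KEY')

response = client.chat.completions.create(
    model='llama-3.1-8b-chat',  # any model from the list below
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response.choices[0].message.content)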

LLM Models

['openchat-3.5-0106', 'deepseek-coder-6.7b-base', 'deepseek-coder-6.7b-instruct', 'deepseek-math-7b-instruct', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'hermes-2-pro-mistral-7b', 'openhermes-2.5-mistral-7b', 'lfm-40b-moe', 'discolm-german-7b-v1', 'falcon-7b-instruct', 'llama-2-7b-chat-int8', 'llama-2-7b-chat-fp16', 'neural-chat-7b-v3-1', 'phi-2', 'sqlcoder-7b-2', 'tinyllama-1.1b-chat', 'zephyr-7b-beta', 'any-uncensored', 'llama-3.1-70b-chat', 'llama-3.1-8b-chat', 'llama-3.1-70b-turbo', 'llama-3.1-8b-turbo', 'gpt-4o-mini', 'gpt-4o', 'gpt-4-turbo']

API Key Usage

You can get a premium API key. Include the API key in the Authorization header as follows:

Authorization: Bearer YOUR_API_KEY

You can check the usage of your key via the following:

/check?key=<KEY>

Get an API key at https://discord.gg/pMeMK4FwXB
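
As a minimal sketch with Python's requests (the key value is a placeholder):

import requests

API_KEY = 'YOUR_API_KEY'

# The same header is sent with any authenticated request
headers = {'Authorization': f'Bearer {API_KEY}'}

# Check the usage of the key
usage = requests.get(f'https://api.airforce/check?key={API_KEY}')
print(usage.text)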

Chat Completions Endpoint

The /v1/chat/completions and /chat/completions endpoints are used to send a prompt to the specified LLM model and receive a response.

Usage

Send a POST request to /v1/chat/completions or /chat/completions with the following JSON payload:

{
    "model": "claude-3-opus",
    "messages": [
        {"role": "system", "content": "System prompt (only the first message, once)"},
        {"role": "user", "content": "Message content"},
        {"role": "assistant", "content": "Assistant response"}
    ],
    "max_tokens": 2048,
    "stream": false,
    "temperature": 0.7,
    "top_p": 0.5,
    "top_k": 0
}

Response Format

The response will be in the following format:

{
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "claude-3-opus",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "Response content"
        },
        "logprobs": null,
        "finish_reason": "stop"
    }],
    "usage": {
        "prompt_tokens": 9,
        "completion_tokens": 12,
        "total_tokens": 21
    }
}
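
A minimal non-streaming sketch with Python's requests, using the payload and response fields shown above (model name and key are placeholders):

import requests

payload = {
    'model': 'llama-3.1-8b-chat',
    'messages': [{'role': 'user', 'content': 'Say hello in one sentence.'}],
    'max_tokens': 256,
    'stream': False,
    'temperature': 0.7,
}

resp = requests.post(
    'https://api.airforce/v1/chat/completions',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    json=payload,
)
resp.raise_for_status()
print(resp.json()['choices'][0]['message']['content'])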

Models Endpoint

The /v1/models and /models endpoints are used to list all available models.

Usage

Send a GET request to /v1/models or /models to retrieve a list of available models.

Response Format

The response will be in the following format:

{
    "object": "list",
    "data": [
        {
            "id": "model-id",
            "object": "model",
            "created": 1686935002,
            "owned_by": "provider"
        },
        ...
    ]
}
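
For example, a short sketch that prints the model ids from the response shape above:

import requests

models = requests.get('https://api.airforce/v1/models').json()
for model in models['data']:
    print(model['id'], '-', model['owned_by'])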

Imagine Endpoint

The /v1/imagine and /imagine endpoints are used to generate images.

List all models: https://api.airforce/imagine/models

Playground: https://flux.api.airforce

Usage

Send a GET request to /v1/imagine or /imagine with the following query parameters:

{
    "prompt": "A beautiful landscape",
    "size": "1:1", // one of 1:1, 16:9, 9:16, 21:9, 9:21, 1:2, 2:1
    "seed": 123456,
    "model": "flux" // one of flux, flux-realism, flux-4o, flux-pixel, flux-3d, flux-anime, flux-disney, any-dark, stable-diffusion-xl-lightning, stable-diffusion-xl-base
}

Response Format

The response will be an image in PNG format, returned as binary data:

Content-Type: image/png
(binary image data)

Example

Example GET request:

GET /v1/imagine?prompt=A+beautiful+landscape&size=1:1&seed=123456&model=flux-realism
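
The same request as a Python sketch that saves the returned PNG (the output filename is arbitrary):

import requests

params = {
    'prompt': 'A beautiful landscape',
    'size': '1:1',
    'seed': 123456,
    'model': 'flux-realism',
}
resp = requests.get('https://api.airforce/v1/imagine', params=params)
resp.raise_for_status()
with open('landscape.png', 'wb') as f:
    f.write(resp.content)  # binary PNG data, per the response format above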

Imagine2 Endpoint

The /v1/imagine2 and /imagine2 endpoints are used to generate images quickly and reliably.

Playground: https://flux.api.airforce

This endpoint is a duplicate of Imagine and takes the same parameters.

GlobalAsk Endpoint

The /v1/ask and /ask endpoints use the GlobalAsk API, which sends the prompt to ChatGPT with web search and image generation enabled.

Usage

Send a GET request to /v1/ask or /ask with the following query parameter:

{
    "prompt": "What is ChatGPT"
}

Response Format

The response will be a text/event-stream consisting of multiple JSON chunks.

Each line is sent separately and contains a JSON object in the following format:

{
    "message": STRING,
    "url": STRING
}
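
A sketch of consuming the stream with Python's requests; stripping an SSE-style 'data:' prefix is a defensive assumption, since the documented lines are plain JSON objects:

import json
import requests

with requests.get(
    'https://api.airforce/v1/ask',
    params={'prompt': 'What is ChatGPT'},
    stream=True,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line:
            continue
        # Assumption: tolerate an SSE-style 'data:' prefix if present
        if line.startswith('data:'):
            line = line[len('data:'):].strip()
        chunk = json.loads(line)
        print(chunk.get('message', ''), chunk.get('url', ''))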


Vision Endpoint

The /v1/images/vision and /images/vision endpoints are used to describe an image using specified vision models.

Usage

Send a GET request to /v1/images/vision or /images/vision with the following query parameters:

{
    "url": "URL of the image to be described",
    "model": "gemini-pro-vision" // optional, default is "gemini-pro-vision"
}

Valid models: 'gpt-4o', 'gpt-4o-mini', 'gemini-pro-vision'

If no model is specified, the default model 'gemini-pro-vision' will be used.

Response Format

The response will be a plain-text string describing the image.

Example

Example GET request:

GET /v1/images/vision?url=https://example.com/image.png&model=gemini-pro-vision
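
Equivalently, a Python sketch (the image URL is a placeholder):

import requests

resp = requests.get(
    'https://api.airforce/v1/images/vision',
    params={
        'url': 'https://example.com/image.png',
        'model': 'gemini-pro-vision',  # optional; gpt-4o and gpt-4o-mini also work
    },
)
print(resp.text)  # the description is returned as a plain string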

Suno Endpoint

The /v1/suno and /suno endpoints are used to generate a song using Suno's AI models.

Try here: https://api.airforce/suno

Usage

There are two ways to interact with this endpoint:

POST Request

To generate a song, send a POST request to /v1/suno or /suno with the following JSON payload:

{
    "model": "chirp-v3-5", // Optional, can be one of "chirp-v2-xxl-alpha", "chirp-v3-5", "chirp-v3-0". Default is "chirp-v3-5".
    "prompt": "Describe the type of song you want to generate.", // Required, between 1 and 200 characters.
    "instrumental": "false", // Optional, if set to "true", generates an instrumental version.
    "style": "genre or style", // Optional, between 1 and 120 characters. Only required for custom generation.
    "custom": "false" // Optional, if set to "true", allows for longer and more detailed prompts (up to 1250 characters for most models, up to 3000 characters for "chirp-v3-5").
}

GET Request

To retrieve the status and URLs for an existing song, send a GET request to /v1/suno/<uuid> or /suno/<uuid> with the song's UUID.

Response Format

For POST Request:

A successful response will include:

{
    "status": "success",
    "clips": ["uuid1", "uuid2"]
}

Each UUID can be used to retrieve the song's content via a subsequent GET request.

For GET Request:

The response will include:

{
    "status": "generating" | "finished",
    "image_url": "https://link_to_image",
    "audio_url": "https://link_to_audio",
    "video_url": "https://link_to_video"
}

Example POST Request

POST /v1/suno
Content-Type: application/json

{
    "model": "chirp-v3-5",
    "prompt": "A catchy pop song with upbeat rhythm",
    "instrumental": "false",
    "custom": "false"
}

Example GET Request

GET /v1/suno/uuid1
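
A sketch that submits a generation request and then polls one of the returned clip UUIDs until it is finished (the polling interval is an assumption):

import time
import requests

BASE = 'https://api.airforce'

# 1. Submit the generation request
job = requests.post(f'{BASE}/v1/suno', json={
    'model': 'chirp-v3-5',
    'prompt': 'A catchy pop song with upbeat rhythm',
    'instrumental': 'false',
    'custom': 'false',
}).json()

clip_id = job['clips'][0]

# 2. Poll the clip until generation finishes
while True:
    clip = requests.get(f'{BASE}/v1/suno/{clip_id}').json()
    if clip['status'] == 'finished':
        print('audio:', clip['audio_url'])
        print('video:', clip['video_url'])
        break
    time.sleep(10)  # assumed polling interval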

Get Audio Endpoint

The /get-audio endpoint is used to convert text to speech and return the audio as an MP3 file. The text is provided via a URL query parameter.

Usage

Send a GET request to /get-audio with the following query parameters:

GET /get-audio?text=Your+text+here&voice=alex

For example, to convert "Hello world" to speech:

GET /get-audio?text=Hello+world&voice=sophia

Available voices are alex and sophia - more soon.

Response Format

The response will be an MP3 audio file with the following format:

Content-Type: audio/mpeg
(binary audio data)

Example

Example GET request:

GET /get-audio?text=Hello+world&voice=alex

A successful request will return an MP3 file containing the spoken text.
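
As a sketch, downloading the MP3 with Python's requests (the output filename is arbitrary):

import requests

resp = requests.get(
    'https://api.airforce/get-audio',
    params={'text': 'Hello world', 'voice': 'alex'},
)
resp.raise_for_status()
with open('hello.mp3', 'wb') as f:
    f.write(resp.content)  # binary MP3 data (audio/mpeg)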