
Bulk Image Moderation

Overview

What is Bulk Image Moderation?

Bulk Image Moderation means that you submit multiple images at the same time (i.e. within a single request) to our Image Moderation API.

Why?

The easiest way to use the Image Moderation API is by submitting images individually. You perform a new API request each time you need to moderate an image.

In some cases, it can be more efficient to submit multiple images at the same time. For example, if you have a long queue of images to review and want to minimize the number of API calls, bulk submission can be useful.

Limitations

You can submit up to 6 images within a single request, with a maximum total size of 24 megabytes.
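These limits can be enforced client-side before building a request. The sketch below is a hypothetical helper (`check_batch` is not part of the API) that mirrors the documented limits of 6 images and 24 megabytes per request:

```python
import os

# Documented bulk limits: up to 6 images, 24 megabytes total per request.
MAX_IMAGES = 6
MAX_TOTAL_BYTES = 24 * 1024 * 1024

def check_batch(paths):
    """Return True if the given image files fit within one bulk request."""
    if len(paths) > MAX_IMAGES:
        return False
    total = sum(os.path.getsize(p) for p in paths)
    return total <= MAX_TOTAL_BYTES
```

If a batch fails the check, you can split it into smaller requests before submitting.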

Code examples

Getting started

If you haven't already, create an account to get your own API keys.

Submit images

Let's say you want to moderate 5 images within a single request using the following three models: nudity, weapon and offensive. You can do so by sending the raw bytes of the 5 images like this:


curl -X POST 'https://api.sightengine.com/1.0/check.json' \
    -F 'media[]=@./image1.jpg' \
    -F 'media[]=@./image2.jpg' \
    -F 'media[]=@./image3.jpg' \
    -F 'media[]=@./image4.jpg' \
    -F 'media[]=@./image5.jpg' \
    -F 'models=nudity,weapon,offensive' \
    -F 'api_user={api_user}' \
    -F 'api_secret={api_secret}'


# this example uses requests
import requests
import json

data = {
  # specify the models you want to apply
  'models': 'nudity,weapon,offensive',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
# use a list of (field, file) tuples so every file is sent
# under the same media[] field, in a predictable order
images = [
  ('media[]', open('./image1.jpg', 'rb')),
  ('media[]', open('./image2.jpg', 'rb')),
  ('media[]', open('./image3.jpg', 'rb')),
  ('media[]', open('./image4.jpg', 'rb')),
  ('media[]', open('./image5.jpg', 'rb'))
]
r = requests.post('https://api.sightengine.com/1.0/check.json', files=images, data=data)

output = json.loads(r.text)

The API will then return a JSON response. The results for the different images are encapsulated in the data field, an array in which each element is a JSON object holding the moderation result for one of the submitted images. As the order of the results is not guaranteed, we strongly recommend using the media field within each object to match each image with its moderation results.

{
    "status": "success",
    "request": {
        "id": "req_56JCYHWOIpuU8dpmxeAKw",
        "timestamp": 1513479454.157561,
        "operations": 15
    },
    "data": [
        {
            "weapon": 0.005,
            "alcohol": 0.008,
            "drugs": 0.01,
            "nudity": {
                "raw": 0.01,
                "safe": 0.98,
                "partial": 0.01
            },
            "offensive": {
                "prob": 0.01
            },
            "media": {
                "id": "med_56JC6VwhaI1RUOmU20eYx",
                "uri": "image1.jpg"
            }
        },
        {
            "weapon": 0.009,
            "alcohol": 0.008,
            "drugs": 0.01,
            "nudity": {
                "raw": 0.01,
                "safe": 0.98,
                "partial": 0.01
            },
            "offensive": {
                "prob": 0.01
            },
            "media": {
                "id": "med_56JCjmfllgusEZaDQ5lRH",
                "uri": "image2.jpg"
            }
        },
        {
            "weapon": 0.009,
            "alcohol": 0.01,
            "drugs": 0.01,
            "nudity": {
                "raw": 0.042,
                "safe": 0.948,
                "partial": 0.01
            },
            "offensive": {
                "prob": 0.01
            },
            "media": {
                "id": "med_56JCMTohYeH8JntUh8KV2",
                "uri": "image3.jpg"
            }
        },
        {
            "weapon": 0.006,
            "alcohol": 0.008,
            "drugs": 0.01,
            "nudity": {
                "raw": 0.01,
                "safe": 0.98,
                "partial": 0.01
            },
            "offensive": {
                "prob": 0.01
            },
            "media": {
                "id": "med_56JCMRUYrpTgXEQvcNmHn",
                "uri": "image4.jpg"
            }
        },
        {
            "weapon": 0.006,
            "alcohol": 0.008,
            "drugs": 0.01,
            "nudity": {
                "raw": 0.01,
                "safe": 0.98,
                "partial": 0.01
            },
            "offensive": {
                "prob": 0.01
            },
            "media": {
                "id": "med_56JCzJSMK0jsrvJ6iGSqD",
                "uri": "image5.jpg"
            }
        }
    ]
}

The structure of the moderation results will depend on the models you choose. You can have a look at the Model Reference for more details on the available models and the structure of the JSON response.
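Since the order of results is not guaranteed, matching each result back to its image via the media field can be done by indexing the data array by uri. This is a minimal sketch assuming the response shape shown above; `index_by_uri` is an illustrative helper, not part of the API:

```python
# Index bulk moderation results by the submitted file name,
# using the media.uri field present in each result object.

def index_by_uri(response):
    """Map each submitted image's uri to its moderation result."""
    return {item['media']['uri']: item for item in response['data']}

# Example with a trimmed-down response:
response = {
    "status": "success",
    "data": [
        {"nudity": {"safe": 0.98}, "media": {"id": "med_1", "uri": "image1.jpg"}},
        {"nudity": {"safe": 0.94}, "media": {"id": "med_2", "uri": "image2.jpg"}},
    ],
}
results = index_by_uri(response)
```

You can then look up, say, `results['image2.jpg']` regardless of where that image landed in the data array.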

Any other needs?

See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...
