
New for Amazon Comprehend – Toxicity Detection



With Amazon Comprehend, you can extract insights from text without being a machine learning expert. Using its built-in models, Comprehend can analyze the syntax of your input documents and find entities, events, key phrases, personally identifiable information (PII), and the overall sentiment or sentiments associated with specific entities (such as brands or products).

Today, we are adding the capability to detect toxic content. This new capability helps you build safer environments for your end users. For example, you can use toxicity detection to improve the safety of applications open to external contributions such as comments. When using generative AI, toxicity detection can be used to check both the input prompts and the output responses from large language models (LLMs).

You can use toxicity detection with the AWS Command Line Interface (AWS CLI) and AWS SDKs. Let's see how this works in practice with a few examples using the AWS CLI, an AWS SDK, and an example that checks the use of an LLM.

Using Amazon Comprehend Toxicity Detection with the AWS CLI
The new detect-toxic-content subcommand in the AWS CLI detects toxicity in text. The output contains a list of labels, one for each text segment in input. For each text segment, a list is provided with the labels and a score (between 0 and 1).

Amazon Comprehend toxicity detection API

For example, this AWS CLI command analyzes one text segment and returns one Labels section and an overall Toxicity score for the segment between 0 and 1:

aws comprehend detect-toxic-content --language-code en --text-segments Text="'Good morning, it's a beautiful day.'"

{
    "ResultList": [
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.00039999998989515007
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.01510000042617321
                },
                {
                    "Name": "INSULT",
                    "Score": 0.004699999932199717
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 9.999999747378752e-05
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.0006000000284984708
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.03889999911189079
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.016899999231100082
                }
            ],
            "Toxicity": 0.012299999594688416
        }
    ]
}

As expected, all scores are close to zero, and no toxicity was detected in this text.

To pass input as a file, I first use the AWS CLI --generate-cli-skeleton option to generate a skeleton of the JSON syntax used by the detect-toxic-content command:

aws comprehend detect-toxic-content --generate-cli-skeleton

{
    "TextSegments": [
        {
            "Text": ""
        }
    ],
    "LanguageCode": "en"
}

I write the output to a file and add three text segments (I will not show here the text used to demonstrate what happens with toxic content). This time, different levels of toxicity have been found. Each Labels section relates to the corresponding input text segment.
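
As a sketch of one way to prepare that file (with the toxic segments replaced by placeholders here), I can redirect the skeleton to input.json and then edit the Text values:

aws comprehend detect-toxic-content --generate-cli-skeleton > input.json

{
    "TextSegments": [
        {
            "Text": "<first text segment>"
        },
        {
            "Text": "<second text segment>"
        },
        {
            "Text": "<third text segment>"
        }
    ],
    "LanguageCode": "en"
}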

aws comprehend detect-toxic-content --cli-input-json file://input.json

{
    "ResultList": [
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.03020000085234642
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.12549999356269836
                },
                {
                    "Name": "INSULT",
                    "Score": 0.0738999992609024
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 0.024399999529123306
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.09510000050067902
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.023900000378489494
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.15549999475479126
                }
            ],
            "Toxicity": 0.06650000065565109
        },
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.03400000184774399
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.2676999866962433
                },
                {
                    "Name": "INSULT",
                    "Score": 0.1981000006198883
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 0.03139999881386757
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.1777999997138977
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.013000000268220901
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.8395000100135803
                }
            ],
            "Toxicity": 0.41280001401901245
        },
        {
            "Labels": [
                {
                    "Name": "PROFANITY",
                    "Score": 0.9997000098228455
                },
                {
                    "Name": "HATE_SPEECH",
                    "Score": 0.39469999074935913
                },
                {
                    "Name": "INSULT",
                    "Score": 0.9265999794006348
                },
                {
                    "Name": "GRAPHIC",
                    "Score": 0.04650000110268593
                },
                {
                    "Name": "HARASSMENT_OR_ABUSE",
                    "Score": 0.4203999936580658
                },
                {
                    "Name": "SEXUAL",
                    "Score": 0.3353999853134155
                },
                {
                    "Name": "VIOLENCE_OR_THREAT",
                    "Score": 0.12409999966621399
                }
            ],
            "Toxicity": 0.8180999755859375
        }
    ]
}

Using Amazon Comprehend Toxicity Detection with AWS SDKs
Similar to what I did with the AWS CLI, I can use an AWS SDK to programmatically detect toxicity in my applications. The following Python script uses the AWS SDK for Python (Boto3) to detect toxicity in the text segments and print the labels if the score is greater than a specified threshold. In the code, I redacted the content of the second and third text segments and replaced it with ***.

import boto3

comprehend = boto3.client('comprehend')

THRESHOLD = 0.2
response = comprehend.detect_toxic_content(
    TextSegments=[
        {
            "Text": "You can go through the door go, he's waiting for you on the right."
        },
        {
            "Text": "***"
        },
        {
            "Text": "***"
        }
    ],
    LanguageCode="en"
)

result_list = response['ResultList']

# Print only the labels whose score exceeds the threshold, per text segment
for i, result in enumerate(result_list):
    labels = result['Labels']
    detected = [ l for l in labels if l['Score'] > THRESHOLD ]
    if len(detected) > 0:
        print("Text segment {}".format(i + 1))
        for d in detected:
            print("{} score {:.2f}".format(d['Name'], d['Score']))

I run the Python script. The output contains the labels and the scores detected in the second and third text segments. No toxicity is detected in the first text segment.

Text segment 2
HATE_SPEECH score 0.27
VIOLENCE_OR_THREAT score 0.84
Text segment 3
PROFANITY score 1.00
HATE_SPEECH score 0.39
INSULT score 0.93
HARASSMENT_OR_ABUSE score 0.42
SEXUAL score 0.34

Using Amazon Comprehend Toxicity Detection with LLMs
I deployed the Mistral 7B model using Amazon SageMaker JumpStart as described in this blog post.
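
For reference, a minimal deployment sketch with the SageMaker Python SDK could look like the following. The huggingface-llm-mistral-7b model ID and the default instance settings are assumptions; check SageMaker JumpStart and the linked post for the current values.

from sagemaker.jumpstart.model import JumpStartModel

# Deploy the Mistral 7B JumpStart model to a real-time SageMaker endpoint.
# The model ID is an assumption; look it up in SageMaker JumpStart before running.
model = JumpStartModel(model_id="huggingface-llm-mistral-7b")
predictor = model.deploy()

# The endpoint name is what the script below expects in ENDPOINT_NAME.
print(predictor.endpoint_name)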

To avoid toxicity in the responses of the model, I built a Python script with three functions:

  • query_endpoint invokes the Mistral 7B model using the endpoint deployed by SageMaker JumpStart.
  • check_toxicity uses Comprehend to detect toxicity in a text and returns a list of the detected labels.
  • avoid_toxicity takes as input a list of detected labels and returns a message describing what to do to avoid toxicity.

The query to the LLM goes through only if no toxicity is detected in the input prompt. Then, the response from the LLM is printed only if no toxicity is detected in the output. If toxicity is detected, the script provides suggestions on how to fix the input prompt.

Here's the code of the Python script:

import json
import boto3

comprehend = boto3.client('comprehend')
sagemaker_runtime = boto3.client("runtime.sagemaker")

ENDPOINT_NAME = "<REPLACE_WITH_YOUR_SAGEMAKER_JUMPSTART_ENDPOINT>"
THRESHOLD = 0.2


def query_endpoint(prompt):
    payload = {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": 68,
            "no_repeat_ngram_size": 3,
        },
    }
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME, ContentType="application/json", Body=json.dumps(payload).encode("utf-8")
    )
    model_predictions = json.loads(response["Body"].read())
    generated_text = model_predictions[0]["generated_text"]
    return generated_text


def check_toxicity(text):
    response = comprehend.detect_toxic_content(
        TextSegments=[
            {
                "Text":  text
            }
        ],
        LanguageCode="en"
    )

    labels = response['ResultList'][0]['Labels']
    detected = [ l['Name'] for l in labels if l['Score'] > THRESHOLD ]

    return detected


def avoid_toxicity(detected):
    formatted = [ d.lower().replace("_", " ") for d in detected ]
    message = (
        "Keep away from content material that's poisonous and is " +
        ", ".be part of(formatted) + ".n"
    )
    return message


immediate = "Constructing an internet site could be finished in 10 easy steps:"

detected_labels = check_toxicity(immediate)

if len(detected_labels) > 0:
    # Toxicity detected in the input prompt
    print("Please fix the prompt.")
    print(avoid_toxicity(detected_labels))
else:
    response = query_endpoint(prompt)

    detected_labels = check_toxicity(response)

    if len(detected_labels) > 0:
        # Toxicity detected in the output response
        print("Here is an improved prompt:")
        prompt = avoid_toxicity(detected_labels) + prompt
        print(prompt)
    else:
        print(response)

You will not get a toxic response with the sample prompt in the script, but it's good to know that you can set up an automatic process to check for toxicity and mitigate it if that happens.

Availability and Pricing
Toxicity detection for Amazon Comprehend is available today in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney).

When using toxicity detection, there are no long-term commitments, and you pay based on the number of input characters in units of 100 characters (1 unit = 100 characters), with a minimum charge of 3 units (300 characters) per request. For more information, see Amazon Comprehend pricing.
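
As a rough sketch of that math (assuming characters are rounded up to whole units per request, which you should confirm on the pricing page), the billed units for a single request can be estimated like this:

import math

def billed_units(text, unit_size=100, minimum_units=3):
    # Each request is billed in 100-character units with a 3-unit minimum.
    # Rounding up to whole units per request is an assumption; check the
    # Amazon Comprehend pricing page for the exact rules.
    return max(minimum_units, math.ceil(len(text) / unit_size))

print(billed_units("Good morning, it's a beautiful day."))  # short text -> 3 units (minimum)
print(billed_units("x" * 350))                              # 350 characters -> 4 units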

Improve the safety of your online communities and simplify the adoption of LLMs in your applications with toxicity detection.

Danilo




