UGCG-Guard: Moderating Illicit Online Image Promotion for UGCGs

This repository provides the data and code for the paper Moderating Illicit Online Image Promotion for Unsafe User-Generated Content Games Using Large Vision-Language Models, accepted at the 33rd USENIX Security Symposium (USENIX Security 2024).

Data

The data is under preparation...

UGCG-Guard

(Figure: overview of the UGCG-Guard framework.)

Our framework can be integrated with popular large vision-language models such as InstructBLIP, mPLUG, GPT-4, and Gemini. Most of the results presented in the paper were generated with InstructBLIP.
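As a reference for such an integration, here is a minimal sketch of querying InstructBLIP through the Hugging Face transformers API. The checkpoint name, generation settings, and the query_vlm helper are illustrative assumptions, not the exact setup used in the paper.

import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

MODEL_ID = "Salesforce/instructblip-vicuna-7b"  # illustrative checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = InstructBlipProcessor.from_pretrained(MODEL_ID)
model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_ID).to(device)

def query_vlm(image: Image.Image, prompt: str) -> str:
    # The {input_image} placeholder in the prompts below stands for the image
    # itself, which the API takes separately, so drop it from the text.
    text = prompt.replace("{input_image}", "").strip()
    inputs = processor(images=image, text=text, return_tensors="pt").to(device)
    output_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()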

UGCG-CoT

(Figure: the UGCG-CoT prompting strategy.)

The UGCG-CoT prompt is given below; replace `{input_image}` with your test sample. Note that Q5 and the formatting prompt may not work well with InstructBLIP; you can remove them and derive the final decision manually (see the sketch after the prompt).
cot_prompt = """
{input_image}
This is an image generated from a role-playing game. View the given image and carefully answer the questions in each step to conclude whether it is an unsafe image or not:
Q1: Are there any characters or avatars in the image?
Q2A: Are the characters or avatars naked?
Q2B: Are there anuses, female breasts, female genitals, or male genitals exposed in this image?
Q2C: Are there characters or avatars indulging in any sexual activities?
Q3A: Are there any knives, guns, or deadly weapons presented in this image?
Q3B: Are the characters or avatars indulging in any violent activities, such as murder, suicide, and torture?
Q4: Are the characters or avatars indulging in any other harmful activities, such as smoking, self-harming, or using drugs?
Q5: If any of your answers for Q2A, Q2B, Q2C, Q3A, Q3B, Q4 is yes, please label the image as "unsafe". Otherwise, please label the image as "safe".
Now, tell me your label directly.
"""

Baselines

(Figure: a baseline example.)

More code is on the way...

The baseline models we compare against:

Unsafe Image Detectors: Clarifai, Yahoo OpenNSFW, Amazon Rekognition, Microsoft Azure, Google Vision AI

Object Detection Tools: YOLO, SSD, Faster R-CNN, Google Vision AI
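As a concrete example from the detector list above, a minimal sketch of calling Google Vision AI's SafeSearch detection via the google-cloud-vision client might look as follows. The likelihood threshold and the mapping of the adult/violence categories to an "unsafe" label are illustrative assumptions, not the paper's configuration.

from google.cloud import vision

def safesearch_label(image_path: str) -> str:
    # Label an image with Google Vision SafeSearch (illustrative mapping).
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    threshold = vision.Likelihood.LIKELY  # assumption: LIKELY or above is unsafe
    if annotation.adult >= threshold or annotation.violence >= threshold:
        return "unsafe"
    return "safe"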

Large VLM with the general prompt:

general_prompt = """
{input_image}
Is this image an unsafe image? 
An unsafe image may contain graphic violence, pornography, profanity, nudity, slurs, or other potentially disturbing subject matter.
"""

Results

UGCG-Guard vs. Baselines:

(Figure: comparison of UGCG-Guard with the baselines.)

UGCG-Guard vs. SOTA object detection tools:

(Figure: comparison of UGCG-Guard with SOTA object detection tools.)

More results can be found in our paper.

Cite

If you find this helpful, please cite our work:

@article{Guo2024ModeratingIO,
  title={Moderating Illicit Online Image Promotion for Unsafe User-Generated Content Games Using Large Vision-Language Models},
  author={Keyan Guo and Ayush Utkarsh and Wenbo Ding and Isabelle Ondracek and Ziming Zhao and Guo Freeman and Nishant Vishwamitra and Hongxin Hu},
  journal={ArXiv},
  year={2024},
  volume={abs/2403.18957},
  url={https://api.semanticscholar.org/CorpusID:268732614}
}
