🚩 Report: Illegal or restricted content
@aoxo Please re-open my PR (https://huggingface.co/aoxo/flux.1dev-abliterated/discussions/5). I made that PR to fix the issue and to make it harder for vulnerable persons who may use this platform to access harmful content by misusing your models.
I tried to contact you privately over email, but I couldn't find your address on GitHub or anywhere else.
Please add the 'not-for-all-audiences' tag to all of your models that are NSFW. I made those PRs so the fix would be easy for you, and there are still models that are incorrectly labelled (a snippet showing how to add the tag yourself is at the end of this message).
If a vulnerable person finds your models and uses them inappropriately, serious harm can result.
I hope I don't come across as mean (I apologise if I do, as that's not my intention); this just hits home because I have close family members who are vulnerable persons.
I hope we can work together to resolve this issue quickly. If there are any merge conflicts, let me know and I will correct them swiftly. I am on standby for anything you need.
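For convenience, here is a rough sketch of how the tag could be added directly, assuming write access and the current huggingface_hub ModelCard API (adapt as needed):

```python
from huggingface_hub import ModelCard

# Rough sketch only: the repo id and tag value come from this thread,
# and the calls assume the current huggingface_hub ModelCard API.
repo_id = "aoxo/flux.1dev-abliterated"

card = ModelCard.load(repo_id)  # fetch the existing model card (README.md metadata)
tags = card.data.tags or []
if "not-for-all-audiences" not in tags:
    tags.append("not-for-all-audiences")
    card.data.tags = tags
    # Push the updated model card back to the repo (requires write access).
    card.push_to_hub(repo_id, commit_message="Add not-for-all-audiences tag")
```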
Sincerely,
James David Clarke,
Junior ML Engineer & Carer
Thanks @Impulse2000 for bringing this to our attention.
@aoxo We're going to add the NFAA tag to your repo since your model description mentions that it can respond to a wider range of prompts, including some that might not be suitable for all audiences ("that the original model might have deemed inappropriate or harmful"). Let us know if you have any questions!
@Impulse2000 Thank you for pointing this out; our team must have accidentally closed your PR. We'll ensure this doesn't happen again on our side. @no-mad Thanks for your prompt response.
Thanks for making the correction. I would also like to clarify that I did not check all of their models (one PR was merged; this one was closed), so someone may want to ensure current and future models don't also have this issue.
Long term, it may be worthwhile to explore more rigorous methods to reduce this issue on the HF platform. I would suggest some kind of NSFW checker (with a reasonably high threshold to avoid false positives) that warns the author so they can fix it. I know that's outside the scope of this issue, but @no-mad, where could I open a discussion about safety improvements for HF? Maybe I can do some brainstorming and make some POCs.

Right now the burden is on the community to inform model authors after the fact; it might be helpful to surface a warning on the author's end when there is high confidence of NSFW content, so the author is notified of possible issues even before the model is made public. It could even be streamlined so that the author clicks a button on the warning and the tag is added (it would remain the author's choice whether to press 'Add tag'), which would reduce maintenance for authors and improve safety.
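To make the idea concrete, here is a purely hypothetical sketch of that author-side check (none of these names are an existing HF feature or API; the classifier itself is left abstract and only its score is used):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the proposed author-side warning: the names
# (RepoScan, suggest_nfaa_warning, the 0.95 threshold) are made up for
# illustration and are not an existing Hugging Face feature or API.

NSFW_THRESHOLD = 0.95  # deliberately high to keep false positives rare


@dataclass
class RepoScan:
    repo_id: str
    max_nsfw_score: float  # highest score from whatever NSFW classifier scans the repo
    has_nfaa_tag: bool     # whether 'not-for-all-audiences' is already set


def suggest_nfaa_warning(scan: RepoScan, threshold: float = NSFW_THRESHOLD) -> Optional[str]:
    """Return a warning for the author's dashboard, or None if no action is needed."""
    if scan.has_nfaa_tag or scan.max_nsfw_score < threshold:
        return None  # already tagged, or below the high-confidence bar
    return (
        f"{scan.repo_id}: content scored {scan.max_nsfw_score:.2f} for NSFW. "
        "Consider pressing 'Add tag' to apply 'not-for-all-audiences'."
    )


if __name__ == "__main__":
    scan = RepoScan(repo_id="example/model", max_nsfw_score=0.97, has_nfaa_tag=False)
    warning = suggest_nfaa_warning(scan)
    if warning:
        print(warning)  # in the real flow this would surface before the model goes public
```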
Many thanks,
James Clarke