Have the forum filter tell you what you said wrong

id: 726704

category: Suggestions

posts: 75

Sliverus
Although the previous forum filter had several flaws, it did have one undeniable benefit that the current filter lacks: it would tell you what you said wrong. It would censor the exact word that triggered it, so you'd know which part of your post was the problem.

With the new filter, there are people who keep asking why their forum post has “inappropriate language”. It's been fairly confusing lately.

My suggestion is that whenever you post an inappropriate comment, the filter would tell you it is inappropriate and list the exact phrase that made it inappropriate in the first place.

For example:
Sorry, this post appears to include unsuitable language and will not be updated.

Here is/are the phrase(s) that led to this message: (…)
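For illustration, here's a minimal sketch of how such a check might work. This is purely hypothetical: the word list, the `check_post` name, and the matching logic are made up for the example, and Scratch's real filter is surely more sophisticated.

```python
import re

# Hypothetical blocked-phrase list; placeholder entries only.
BLOCKED_PHRASES = ["badword", "another bad phrase"]

def check_post(text):
    """Return the blocked phrases found in `text` (empty list if clean)."""
    found = []
    for phrase in BLOCKED_PHRASES:
        # Case-insensitive substring match, escaping any regex metacharacters.
        if re.search(re.escape(phrase), text, re.IGNORECASE):
            found.append(phrase)
    return found

matches = check_post("This post contains a BadWord somewhere.")
if matches:
    print("Sorry, this post appears to include unsuitable language "
          "and will not be updated.")
    print("Here is/are the phrase(s) that led to this message:",
          ", ".join(matches))
```

The only change from the current behavior would be showing the matched phrases to the user instead of a bare rejection.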

I know there is probably a suggestion for comment filters to tell you what you said wrong, but I don't want to merge topics, because this filter was put in place relatively recently and might be easier to fix than the comment filter. Also, the main site and the forums work vastly differently from each other, so it might be a pain for the developers to do it all at once.
Support, goodness god it's so annoying

Although I'd prefer any inappropriate words be censored after the post is made
Dyanoa

ajskateboarder wrote:

Support, goodness god it's so annoying

Although I'd prefer any inappropriate words be censored after the post is made
Agreed, the new filter is annoying
XCartooonX
Support, I'd love to have this implemented in comments too (seriously, it gets REALLY annoying!)
I suggested this a while back and it got shut down. The example given was that some young, innocent child would unknowingly type something that contains a bad word, see the word spelled out for them, and then bam, they know a new swear word now. I don't agree with it, but that was the end of my suggestion.

That being said, I still think some variation of it is a good idea, so I wholeheartedly support.
glitcX
the filter made this not go through

“If you're talking about how people make really good thumbnail art. I would blur the background a bit, then have the main element in front, with the text, the text i use outline. and move it down a couple of times to make it look cool.”

EDIT: it was actually this:
https://cdn2(dot)scratch.mit.edu/ get_image/ gallery/ 34242772_1000x1000(dot)png
(remove spaces, and change “(dot)” to a period)

Interstellar-TV wrote:

(#6)
The example given was that some young, innocent child would unknowingly type something that contains a bad word, see the word spelled out for them, and then bam, they know a new swear word now.
i see this as a double-edged sword. if the person is polite and they find out that the word they used is a swear word, they will know not to say it. but if the parents have access to the user's browsing history and see that the user has searched up a swear word, the user might get into trouble.

most of the time, someone says something inappropriate consciously. if the user says a swear word or something inappropriate in the forums knowingly, they should know that scratch does not condone the use of such language, so they'll be educated not to use inappropriate language in the future on scratch.

but in the unlikely case it's a spelling mistake, and the user goes to search up what the swear means, they'll learn the wrong stuff, especially if the word's meaning is related to human body parts i shall not speak of. however, this is an extreme case and isn't scratch's fault; it's an accident, and accidents happen.

sorry if the points i made sound like something a 3 year old would say, i haven't used the forums in a year so i forgot how to make constructive points
Sliverus

starhero5697 wrote:

but in the unlikely case its a spelling mistake, and the user goes to search up what the swear means, they'll learn the wrong stuff, especially if the word's meaning is related to human body parts i shall not speak of. so i have mixed feelings on this suggestion.
However, that's extremely rare. More often than not, when kids hear an inappropriate word, it's from hearing someone else use it. And at that point, it's not the Scratch Team's fault that the kid got exposed to the word, as long as it didn't happen on Scratch.

Y'know, there was once a time when I was in second grade, and I accidentally covered up part of a word, and the part that wasn't covered up was the spelling of a swear word. I won't say what the word was, but some kids around me got concerned that I was trying to spell a bad word. In the end, I ended up learning it was a bad word, but I didn't get in trouble for it because it wasn't really anyone's fault; it was an accident.

In that same way, it isn't the Scratch Team's fault for trying to prevent users from making Scratch an unsafe place. It isn't anyone's fault; it's an accident.

Sliverus wrote:

(#9)
In that same way, it isn't the Scratch Team's fault for trying to prevent users from making Scratch an unsafe place. It isn't anyone's fault; it's an accident
great point. edited my post to reflect this statement.

another benefit of this suggestion: if the filter flags something that actually isn't inappropriate, it would highlight it, so the user won't be confused about why the message was filtered. they can also let the scratch team know about the bug.
roofogato
Support. The people on the forum are generally seen as slightly more mature than main site users (i don't think lil 6 year old jimmy is gonna be frequenting bugs and glitches and talking about syntax and allat)

This would also be nice since the comment bugs thread in BaG is now down, and Contact Us is probably gonna get flooded
Support. The new filter is annoying.
EDawg2011
Support. Imagine you try to post a 5,000-word essay, but it won't let you post it, so you have to spend hours examining the essay for anything that could've possibly triggered the filter. The filter could instead just say the part that triggered it. I also like Sliverus' point about accidents.
Support. It's already implemented for comments and tells you what you did wrong after you get a message from the ST about your comment. But with ajskateboarder's reasoning, it should censor the words after you post it.
Bump (I usually use my alt mainly for bumps)
usefun
Support. This happens to me too, and I think “What's wrong with the post?”
Yeah! I like that idea! Maybe it could also apply to commenting!
Xzillox
Would it eventually apply to commenting as well? I feel like, since far fewer users (less than 1%) are on the forums than on the main site, it would be better restricted to only the forums.
Sliverus

Xzillox wrote:

Would it eventually apply to commenting as well? I feel like, since far fewer users (less than 1%) are on the forums than on the main site, it would be better restricted to only the forums.
I actually think a suggestion for that already exists. I personally wouldn't be opposed to it, but let's keep this discussion about the forum filter.
the common argument that comes up with this suggestion for the main site is that a user, probably a young child, may pick up a new bad word if they accidentally enter a bad word and the filter then tells them the bad word.

But since the forum community has an older audience and less of a wide-scope audience (less than 1% of Scratch users use the forums), do you think this classic counter-argument still holds the same amount of merit?

starhero5697 wrote:

Interstellar-TV wrote:

(#6)
The example given was that some young, innocent child would unknowingly type something that contains a bad word, see the word spelled out for them, and then bam, they know a new swear word now.
i see this as a double-edged sword. if the person is polite and they find out that the word they used is a swear word, they will know not to say it. but if the parents have access to the user's browsing history and see that the user has searched up a swear word, the user might get into trouble.
Nah that’s such a low chance that it isn’t worth bringing up

My take:

This + a message saying “if this seems to be a mistake please (contact us) to report it or for more information” would be even better, as it would reduce the BaG topics about filter issues exposing even more people to bad words
Za-Chary

dertermenter wrote:

the common argument that comes up with this suggestion for the main site is that a user, probably a young child, may pick up a new bad word if they accidentally enter a bad word and the filter then tells them the bad word.

But since the forum community has an older audience and less of a wide-scope audience (less than 1% of Scratch users use the forums), do you think this classic counter-argument still holds the same amount of merit?
Sure. The forum is a part of the Scratch website, and everything on the Scratch website is intended to be for all ages. Nothing on the Scratch website is intended to be age-restricted. (Typical forumers, at least the ones I see, are certainly older, but there are plenty of younger users on the forums, too.) Similarly, some lesser-known Scratcher may post an inappropriate project, and only 1-2 people would see it, but they should still use the Report button on the project.



As for the original suggestion, such a system should really just either work across the entire website, or not at all. If it only worked on the forums, then people would just go to the forums to paste their comments to determine what specifically is inappropriate (and vice versa if it only worked on the main website).

I'd say that it would be better moderation-wise to not have this feature. Aside from the kids-see-bad-words argument, there are two other benefits to the existing system:
  • It makes users think more carefully about what to post before they actually try to post it. Thinking before you act/speak is generally a useful quality in life, and it is good to practice this on Scratch.

  • It is harder to post bad/borderline content. If you knew exactly what was wrong with the comment, you could just change that exact part slightly and quickly post a new comment; if your first attempt was a “bad” comment, your next attempt would sometimes be a “mostly bad” comment. Why make small changes in this sense when you could just try to reword your comment completely? (I don't know if I phrased this point in the best way, but I know that from a moderator's perspective it makes sense.)
Both of these are somewhat irrelevant if you're not intentionally trying to post bad comments, but here I am just trying to highlight the potential for abuse of this system. Regardless, typing with good spelling and grammar usually goes a long way. I've made at least 300 forum posts over the past month, and as far as I recall, there was only 1 time where I ran into filter troubles. Not gonna lie: it was kinda annoying. But that's 1 in 300. I think it's worth having the existing system to prevent abuse as opposed to eliminating minor inconveniences.
Sliverus

dertermenter wrote:

the common argument that comes up with this suggestion for the main site is that a user, probably a young child, may pick up a new bad word if they accidentally enter a bad word and the filter then tells them the bad word.

But since the forum community has an older audience and less of a wide-scope audience (less than 1% of Scratch users use the forums), do you think this classic counter-argument still holds the same amount of merit?
I've honestly never really liked that argument. Even if they don't know what's wrong with the comment, what they usually do instead is ask someone else – whether on Bugs and Glitches, or if they don't know what that is, they'll use Contact Us, ask a Scratch Team member, or make a project questioning why it was wrong.

My point being, children are incredibly curious. Not having the filter tell you what you did wrong would not only prevent innocent users from figuring out the innocent phrase that got them in trouble in the first place, but it wouldn't even necessarily do its intended job; kids would ask what went wrong, and they'd find the swear word anyway.

Besides, for alerts, the Scratch Team still says, “Here are some comments that led to this message” anyway. When a user says something inappropriate, even accidentally, the Scratch Team is still willing to call them out on it, and to be specific.

cookieclickerer33 wrote:

This + a message saying “if this seems to be a mistake please (contact us) to report it or for more information”
Would even be great as it would reduce the BAG topics about filter issues exposing even more people to bad words
Great idea.

Za-Chary wrote:

As for the original suggestion, such a system should really just either work across the entire website, or not at all.
I'm not opposed to that. However, I believe the comment filter suggestion has already been made somewhere. Would it be worth expanding my suggestion to include both?

Za-Chary wrote:

It makes users think more carefully about what to post before they actually try to post it.
Does it, though? For example, I'm careful about what I comment to ensure it's respectful, but I was continually tripped up by the filter whenever I wrote the number “3” with the suffix “ish” anywhere afterward in the comment. (I think this has been patched.) Regardless of how hard we try, many innocent users are going to be filtered with little explanation unless we add an explanation feature.

Za-Chary wrote:

If you knew exactly what was wrong with the comment, you could just change that exact part slightly and then quickly post a new comment which — if your first attempt was a “bad” comment — your next attempt would sometimes be a “mostly bad” comment.
I'm not sure you understand the suggestion entirely. (I might be wrong on this, so please correct me if I am.) If you make an inappropriate comment, you'll still be muted; the explanation feature would just be added on top. You can't simply “quickly post a new comment” like that.

Sliverus wrote:

dertermenter wrote:

the common argument that comes up with this suggestion for the main site is that a user, probably a young child, may pick up a new bad word if they accidentally enter a bad word and the filter then tells them the bad word.

But since the forum community has an older audience and less of a wide-scope audience (less than 1% of Scratch users use the forums), do you think this classic counter-argument still holds the same amount of merit?
I've honestly never really liked that argument. Even if they don't know what's wrong with the comment, what they usually do instead is ask someone else – whether on Bugs and Glitches, or if they don't know what that is, they'll use Contact Us, ask a Scratch Team member, or make a project questioning why it was wrong.

My point being, children are incredibly curious. Not having the filter tell you what you did wrong would not only prevent innocent users from figuring out the innocent phrase that got them in trouble in the first place, but it wouldn't even necessarily do its intended job; kids would ask what went wrong, and they'd find the swear word anyway.
This I disagree with. I also think that many children may be confused about what they said wrong, but they may think that the Scratch Team knows best and that there is something not right with the comment, and then may try to fix the comment to avoid the filter or not post the comment at all.

Furthermore, other children may not think much of it and just retype the comment like it's no big deal - I never really understood the reasoning of anything when I was young so I never would have thought anything of it, and instead I would just type a new comment.

Lastly, if a child is curious, which admittedly many of them are, they'll just get “can't tell you” as an answer if they forward the situation to Bugs and Glitches, Contact Us, etc., and then the case will be closed.
Sliverus

dertermenter wrote:

This I disagree with. I also think that many children may be confused about what they said wrong, but they may think that the Scratch Team knows best and that there is something not right with the comment, and then may try to fix the comment to avoid the filter or not post the comment at all.

Furthermore, other children may not think much of it and just retype the comment like it's no big deal - I never really understood the reasoning of anything when I was young so I never would have thought anything of it, and instead I would just type a new comment.
This argument is made with the assumption that kids don't share their “false” mutes and complain about the filter. But it happens every day.

dertermenter wrote:

Lastly, if a child is curious, which admittedly many of them are, they'll just get “can't tell you” as an answer if they forward the situation to Bugs and Glitches, Contact Us, etc., and then the case will be closed.
Not exactly. I've talked with many users outside the foruming community – users who aren't as familiar with why the moderation system is necessary. If someone says there's something wrong but won't say it, they'll ask anyway. And there will always be someone who says, “Well, the word s4nd is actually a bad word that means (…).”

Let me also ask you this: How would they accidentally type a really, really bad word? Most bad words are pretty hard to misspell. This might rarely happen, but let's be realistic. If the Scratch Team is giving alerts manually, they're willing to specifically call users out on what they said wrong. They will say, “Here are some comments that led to this message.”

Sliverus wrote:

My point being, children are incredibly curious. Not having the filter tell you what you did wrong would not only prevent innocent users from figuring out the innocent phrase that got them in trouble in the first place, but it wouldn't even necessarily do its intended job; kids would ask what went wrong, and they'd find the swear word anyway.

Besides, for alerts, the Scratch Team still says, “Here are some comments that led to this message” anyway. When a user says something inappropriate, even accidentally, the Scratch Team is still willing to call them out on it, and to be specific.
However, it's absolutely not OK for a site aimed at all ages to be directly telling or showing a child a bad word.
It would be the difference between the police ignoring you while you stole something vs. the police helping you steal something; although what is achieved is the same, because of the method by which it is achieved, the second is definitely worse than the first.
Sliverus

yadayadayadagoodbye wrote:

However, it's absolutely not OK for a site aimed at all ages to be directly telling or showing a child a bad word.
It would be the difference between the police ignoring you while you stole something vs. the police helping you steal something; although what is achieved is the same, because of the method by which it is achieved, the second is definitely worse than the first.
I'm not sure the analogy works here but I'm not quite sure how to explain my reasoning. So I'll digress.

The Scratch Team already does this via manual alerts. Either remove the “here are some recent comments that led to this message”, or my preferred solution: have the forum filter tell you what you said wrong.

starhero5697 wrote:

Interstellar-TV wrote:

(#6)
The example given was that some young, innocent child would unknowingly type something that contains a bad word, see the word spelled out for them, and then bam, they know a new swear word now.
i see this as a double-edged sword. if the person is polite and they find out that the word they used is a swear word, they will know not to say it. but if the parents have access to the user's browsing history and see that the user has searched up a swear word, the user might get into trouble.
This specifically I disagree with. It doesn't matter how polite a young child is, if they don't know how offensive a word is, they might say it. And that is something I can't bear the thought of.

Something I would never like to happen to me would be the filter taking a bad word hidden in my post and throwing it back in my face.

Really though, I just don't see how you can benefit much from this. I suppose it would make it easier to tell which sentence to rephrase, but let me break the community down into three types of users: children who shouldn't see a bad word just because they accidentally wrote one, trolls who know good and well what they said was wrong, and users who don't care about seeing bad words. Only one of the three types would benefit from this, while the other two are harmed or don't benefit from it. That's why I don't think this is worth it.

Sliverus wrote:

The Scratch Team already does this via manual alerts. Either remove the “here are some recent comments that lead to this message”, or my preferred solution: have the forum filter tell you what you said wrong.
Does the Scratch Team underline bad words hidden in your comment? I don't believe they do, but if they do, they should stop. It's not OK to shove a swear word in a child's face just because they accidentally had one hidden in between two words in their comment.
usefun
Support, it is so annoying to get a message like that and think “What did I do wrong?”

Sliverus wrote:

dertermenter wrote:

This I disagree with. I also think that many children may be confused about what they said wrong, but they may think that the Scratch Team knows best and that there is something not right with the comment, and then may try to fix the comment to avoid the filter or not post the comment at all.

Furthermore, other children may not think much of it and just retype the comment like it's no big deal - I never really understood the reasoning of anything when I was young so I never would have thought anything of it, and instead I would just type a new comment.
Let me also ask you this: How would they accidentally type a really, really bad word?
By a copypasta, a big misspelling, or by smashing some random keys? Also, I think this is beside the point - it may be rare for it to happen, but it definitely can happen, meaning the counterargument still stands strong for me.
Sliverus wrote:

If the Scratch Team is giving alerts manually, they're willing to specifically call users out on what they said wrong. They will say, “Here are some comments that led to this message.”
Well, actual very bad words should be blocked by the filter, meaning the user cannot post them, which also means the user cannot get alerted for something they never posted. So an alert should never teach a user a new very bad word, since that bad word would have blocked the comment from being posted.
kkidslogin

dertermenter wrote:

(#24)

Sliverus wrote:

dertermenter wrote:

the common argument that comes up with this suggestion for the main site is that a user, probably a young child, may pick up a new bad word if they accidentally enter a bad word and the filter then tells them the bad word.

But since the forum community has an older audience and less of a wide-scope audience (less than 1% of Scratch users use the forums), do you think this classic counter-argument still holds the same amount of merit?
I've honestly never really liked that argument. Even if they don't know what's wrong with the comment, what they usually do instead is ask someone else – whether on Bugs and Glitches, or if they don't know what that is, they'll use Contact Us, ask a Scratch Team member, or make a project questioning why it was wrong.

My point being, children are incredibly curious. Not having the filter tell you what you did wrong would not only prevent innocent users from figuring out the innocent phrase that got them in trouble in the first place, but it wouldn't even necessarily do its intended job; kids would ask what went wrong, and they'd find the swear word anyway.
This I disagree with. I also think that many children may be confused about what they said wrong, but they may think that the Scratch Team knows best and that there is something not right with the comment, and then may try to fix the comment to avoid the filter or not post the comment at all.

Furthermore, other children may not think much of it and just retype the comment like it's no big deal - I never really understood the reasoning of anything when I was young so I never would have thought anything of it, and instead I would just type a new comment.

Lastly, if a child is curious, which admittedly many of them are, they'll just get “can't tell you” as an answer if they forward the situation to Bugs and Glitches, Contact Us, etc., and then the case will be closed.
Since when has anybody on Scratch ever thought that the ST knows best? The riots that happened after Studio and Purple (and 3.0, for some people, although I may be misremembering) clearly show that that's not how people who disagree with the ST usually think. Even people who didn't take to the streets to protest the changes were often secretly or not-so-secretly very unhappy with them, and even expected them to be reverted. You can even see this with continued petitions, leavings, and protests about purple, the new forum filter, Roblox, the Purple App, etc.

They might edit the comment to avoid the filter, but most people will not do so just because “the ST knows best.” This is most certainly not something that is across most of Scratch's user base.

Heck, I wouldn't be surprised if many don't even know who the Scratch Team are.
Sliverus

dertermenter wrote:

By a copypasta, a big misspelling, or by smashing some random keys? Also, I think this is beside the point - it may be rare for it to happen, but it definitely can happen, meaning the counterargument still stands strong for me.
Fair point. I have nothing to add.

dertermenter wrote:

Well actual very bad words should be blocked by the filter meaning the user cannot post the bad words, which also means the user cannot get alerted for something they never posted. So an alert should never give a user a new very bad word since that bad word would have blocked the comment from being posted.
Two issues:
  1. I've seen users get away with it all the time, actually. Like when they say sv1c1de to get away with it. And you're right, copypastas, misspellings, etc. exist – so it is definitely possible for a Scratcher to accidentally post a bad word and get an alert specifically stating that word isn't allowed. (And if we're talking about copypastas, and also repeating what other users have said, then it's perfectly logical that this happens often.)
  2. What about when the filter has issues and stops working? It has had issues several times. Although it's rare, you refuted my argument with, "it may be rare for it to happen, but it definitely can happen, meaning the counterargument still stands strong for me." In other words, as long as it is possible that a user can post a horrible word and get an alert about it from the Scratch Team, then users getting alerts with bad words is a real thing and the Scratch Team likely doesn't have a problem with this.

Sliverus wrote:

dertermenter wrote:

Well actual very bad words should be blocked by the filter meaning the user cannot post the bad words, which also means the user cannot get alerted for something they never posted. So an alert should never give a user a new very bad word since that bad word would have blocked the comment from being posted.
Two issues:
  1. I've seen users get away with it all the time, actually. Like when they say sv1c1de to get away with it. And you're right, copypastas, misspellings, etc. exist – so it is definitely possible for a Scratcher to accidentally post a bad word and get an alert specifically stating that word isn't allowed. (And if we're talking about copypastas, and also repeating what other users have said, then it's perfectly logical that this happens often.)
  2. What about when the filter has issues and stops working? It has had issues several times. Although it's rare, you refuted my argument with, "it may be rare for it to happen, but it definitely can happen, meaning the counterargument still stands strong for me." In other words, as long as it is possible that a user can post a horrible word and get an alert about it from the Scratch Team, then users getting alerts with bad words is a real thing and the Scratch Team likely doesn't have a problem with this.
The word you mentioned in your first point I wouldn't really say is a “very bad word”; it's blocked more because it brings up a topic that is too sensitive/dark for a community like Scratch that young children use. So I don't really think it can be classified as a “really bad word” a child can pick up.

For the second point, I don't think those two examples are very comparable - a child picking up a new slur from the filter telling them it's a bad word could happen at any point if this suggestion is implemented, while the filter having an issue comes from a bug, which is very rare, meaning 99% of the time your point cannot stand since the filter would be working.
Sliverus

dertermenter wrote:

The word you mentioned in your first point I wouldn't really say is a “very bad word”; it's blocked more because it brings up a topic that is too sensitive/dark for a community like Scratch that young children use. So I don't really think it can be classified as a “really bad word” a child can pick up.
That was just an example. I'm not going to use a substitution of a swear word in my post. I just used an example of another subject we're not allowed to discuss on Scratch. But for the sake of the argument, imagine it is a swear word.

dertermenter wrote:

For the second point, I don't think those two examples are very comparable - a child picking up a new slur from the filter telling them it's a bad word could happen at any point if this suggestion is implemented, while the filter having an issue comes from a bug, which is very rare, meaning 99% of the time your point cannot stand since the filter would be working.
I see. But now we're in a situation. It might be a bit complicated, but let me try to explain.

Your point about these situations being rare is a double-edged sword. You could use it against my point about the filter being down. However, when I asked you earlier about how someone would accidentally type a bad word, you said:

dertermenter wrote:

By a copypasta, a big mispelling, or by smashing some random keys? Also, I think this is beside the point - it may be rare for it to happen, but it definitely can happen, meaning the counterargument still stands strong for me.
This is self-admittedly a rare occurrence. I mean, what are the odds of pressing random keys and accidentally typing an exact swear word? The chances of that are astronomically low.

However, the basis for your argument is entirely riding on the idea of users pressing buttons and accidentally typing in a swear word. So in other words, if you want to refute my example for being a rare occurrence, I can also refute the basis of your entire argument for the exact same reason.

Does that make sense? I can do my best to clarify if I need to.

Sliverus wrote:

Does that make sense? I can do my best to clarify if I need to.
Yes, I just think my argument has more merit since my example, whilst rare, can happen at any time on Scratch - it's an open window.

On the other hand, your example can only happen on the rare occasion that the filter is down - and even then, people who abuse this would already know the swear words they are typing, so an alert would not teach them any new very bad words, which is the main drawback raised in this topic.
Sliverus Sliverus loading

dertermenter wrote:

Yes, I just think my argument has more merit to it since my example, whilst being rare, can happen at any time on Scratch - it's an open window.

On the other hand, your example can only happen on the rare occasion that the filter is down - and even then, people who abuse this would already know the swear words they are typing, so an alert would not teach them any new bad words, which is the main drawback of the topic.
First of all, who's to say they can't accidentally type one in, like you said, while the filter is down?

But more realistically, let's say “dog” is a bad word. When the filter is down, TONS of users abuse the lack of a filter. In fact, some users will even post words on their profile to test if the filter is down. There was a horrible situation like this that happened before while you were banned. Griffpatch's profile – or really, any popular community – was filled with inappropriate content.

Now imagine a user (let's call him “Harry”) sees someone call someone a “dog”. Harry asks, “Dog? What does that mean?” Later, Harry gets an alert for this comment explaining that he said a swear word. Thus, Harry learns the existence of bad words and the Scratch Team takes part of the fault. Or – which is the point I'm trying to make – it is not the Scratch Team's fault when they tell a user they can't say certain bad words.

Another point I want to mention is what you said about users using copypastas. The character limit is relatively small, so it's very reasonable to assume that the user saw the bad word and that it wasn't the Scratch Team's doing. Not only that, but can you give me an example of a time when someone would purposely post a copypasta without knowing anything that's in it? What's the point? This doesn't happen.

The point being: if a Scratcher has already seen the bad word and it's not the Scratch Team's fault, it's completely reasonable for the Scratch Team to call the user out for it.

Sliverus wrote:

dertermenter wrote:

Yes, I just think my argument has more merit to it since my example, whilst being rare, can happen at any time on Scratch - it's an open window.

On the other hand, your example can only happen on the rare occasion that the filter is down - and even then, people who abuse this would already know the swear words they are typing, so an alert would not teach them any new bad words, which is the main drawback of the topic.
First of all, who's to say they can't accidentally type one in, like you said, while the filter is down?

But more realistically, let's say “dog” is a bad word. When the filter is down, TONS of users abuse the lack of a filter. In fact, some users will even post words on their profile to test if the filter is down. There was a horrible situation like this that happened before while you were banned. Griffpatch's profile – or really, any popular community – was filled with inappropriate content.

Now imagine a user (let's call him “Harry”) sees someone call someone a “dog”. Harry asks, “Dog? What does that mean?” Later, Harry gets an alert for this comment explaining that he said a swear word. Thus, Harry learns the existence of bad words and the Scratch Team takes part of the fault. Or – which is the point I'm trying to make – it is not the Scratch Team's fault when they tell a user they can't say certain bad words.

Another point I want to mention is what you said about users using copypastas. The character limit is relatively small, so it's very reasonable to assume that the user saw the bad word and that it wasn't the Scratch Team's doing. Not only that, but can you give me an example of a time when someone would purposely post a copypasta without knowing anything that's in it? What's the point? This doesn't happen.

The point being: if a Scratcher has already seen the bad word and it's not the Scratch Team's fault, it's completely reasonable for the Scratch Team to call the user out for it.
First point: That's what I am trying to say, a user could mistype a word when the filter is down, meaning an alert could give the user a new bad word. We've already established that mistyping a word into a swear word is rare, and so is the filter being down.

However, to gain a new bad word from an alert, you need two rare occurrences to happen: the filter being down, and the user mistyping a bad word.

On the other hand, to gain a new bad word from the filter, you only need the latter occurrence to happen - a user mistyping a bad word.

Let's create an example: there is a 0.1% chance that a user who knows zero bad words mistypes a word into a swear word they don't know. From this, the filter tells them what word was wrong, meaning the user learns a new bad word. So there is a 0.1% chance of a user learning a new bad word from this example.

Now, let's also say that the filter is down on Scratch 0.1% of the time. There is still a 0.1% chance a user accidentally types a swear word, but since the filter is down, the post goes through and a later alert tells them what they said wrong. Since there is a 0.1% chance for both the filter being down and a user accidentally typing a bad word, overall the chance of a user learning a new swear word from an alert is 0.1% × 0.1% = 0.0001%.

Whilst these statistics are hypothetical, they show that, whilst the odds of a user picking up a new swear word via the filter are low, the odds are even lower for a user picking one up via an alert.
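The compound-probability argument above can be sketched in a few lines. The 0.1% figures are the hypothetical numbers from this post, not real Scratch statistics, and the variable names are mine; note that multiplying two 0.1% chances gives 0.0001%:

```python
# Toy model of the argument above. The probabilities are the poster's
# hypothetical 0.1% figures, not real Scratch statistics.

P_MISTYPE = 0.001      # chance a user accidentally types a swear word they don't know
P_FILTER_DOWN = 0.001  # chance the filter happens to be down at that moment

# Learning a word from the filter message: only the mistype must happen.
p_via_filter = P_MISTYPE

# Learning a word from a later alert: the mistype AND the outage must coincide.
p_via_alert = P_MISTYPE * P_FILTER_DOWN

print(f"via filter message: {p_via_filter:.4%}")  # 0.1000%
print(f"via later alert:    {p_via_alert:.4%}")   # 0.0001%
```

This treats the two events as independent, which is itself part of the hypothetical.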

Your second point is a good point, however I think we need to realise how rare a filterbot outage is - there is a reason the filter outage of 4th April 2022 has its own wiki page: it's such a rare occurrence (and it also shows how important the filter is - if this happened regularly, there wouldn't be articles about specific instances).
Catzcute4 Catzcute4 loading
We should have this all across the site, because otherwise a kid will post a comment, it will get filtered, the kid will get muted, and then the kid may never go on Scratch again.
Za-Chary Za-Chary loading

Elijah999999 wrote:

Sliverus wrote:

The Scratch Team already does this via manual alerts. Either remove the “here are some recent comments that lead to this message”, or my preferred solution: have the forum filter tell you what you said wrong.
Does the Scratch Team underline bad words hidden in your comment? I don't believe they do, but if they do, they should stop. It's not OK to shove a swear word in a child's face just because they accidentally had one hidden in between two words in their comment.
They don't underline it, per se, but in the case of showing a user's comment right back at them, that's really only done when it's clear that the user knows what the word means. If a Scratcher explicitly writes a bad word, they're likely to already know that it's bad, so showing them their explicit comment isn't doing much damage.

The argument of “If the filter shows what is bad, then the user could learn a bad word” really only applies at times when the filter happens to find something inappropriate in a comment that is otherwise appropriate. Like if “cat” was a bad word and someone types "You can learn magic at the wizard's house." - the letters of the bad word hide across the space in "magic at". In this case it would be reasonable to be worried about what the filter shows.
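A toy sketch of how a filter might match a banned word across spaces and punctuation, which is how an innocent sentence can trip it. This is purely illustrative and is not the actual Scratch filter; the function name is mine:

```python
# Toy illustration of how a banned word can hide across a word boundary,
# as in the "cat" example above. NOT the actual Scratch filter.

def find_hidden_word(text, banned):
    """Return the span of `text` whose letters spell `banned` when
    spaces and punctuation are ignored, or None if there is no match."""
    # Keep each letter together with its index in the original text.
    letters = [(ch.lower(), i) for i, ch in enumerate(text) if ch.isalpha()]
    stripped = "".join(ch for ch, _ in letters)
    pos = stripped.find(banned)
    if pos == -1:
        return None
    start = letters[pos][1]                  # index of first matched letter
    end = letters[pos + len(banned) - 1][1]  # index of last matched letter
    return text[start:end + 1]

print(find_hidden_word("You can learn magic at the wizard's house.", "cat"))
# prints: c at   (the bad word spans the space in "magic at")
```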

Sliverus wrote:

When the filter is down
I don't know why we're discussing what could happen when the filter is down when that has happened exactly 1 time in the past 16 years. To discuss a proposal for a feature in case the filter goes down seems like “doomsday speak.”
Catzcute4 Catzcute4 loading

Za-Chary wrote:

It makes users think more carefully about what to post before they actually try to post it. Thinking before you act/speak is generally a useful quality in life, and it is good to practice this on Scratch.
What if you didn’t know that there was a bad word? Pointing it out could teach you that it’s a bad word.

Za-Chary wrote:

It is harder to post bad/borderline content. If you knew exactly what was wrong with the comment, you could just change that exact part slightly and then quickly post a new comment; if your first attempt was a “bad” comment, your next attempt would sometimes only be a “mostly bad” comment. Why make small changes in this sense when you could just try to reword your comment completely? (I don't actually know if I said this point in the best way, but I know that from a moderator's perspective it makes sense.)
Well, we should really just fix the filter, as it already gives out false positives, and the community really doesn’t like the false positives.
Za-Chary Za-Chary loading

Catzcute4 wrote:

Well, we should really just fix the filter, as it already gives out false positives, and the community really doesn’t like the false positives.
The filter is continually being fixed every single day. The filter is not perfect and never will be perfect. The Scratch Team does not intentionally put false positives into the filter, and in fact they would eliminate all false positives if it were possible to do so.
doggy_boi1 doggy_boi1 loading

Sliverus wrote:

dertermenter wrote:

Yes, I just think my argument has more merit to it since my example, whilst being rare, can happen at any time on Scratch - it's an open window.

On the other hand, your example can only happen on the rare occasion that the filter is down - and even then, people who abuse this would already know the swear words they are typing, so an alert would not teach them any new bad words, which is the main drawback of the topic.
First of all, who's to say they can't accidentally type one in, like you said, while the filter is down?

But more realistically, let's say “dog” is a bad word. When the filter is down, TONS of users abuse the lack of a filter. In fact, some users will even post words on their profile to test if the filter is down. There was a horrible situation like this that happened before while you were banned. Griffpatch's profile – or really, any popular community – was filled with inappropriate content.
You can bypass any filter whether it's down or not, just like how people bypass the filter for dinscord
Sliverus Sliverus loading

dertermenter wrote:

snip
Your first point is a fair point. However, even the possibility of someone on Scratch being exposed to swear words should never exist. One user being exposed to inappropriate content could be enough to get parents complaining, create an article painting the Scratch Team in a negative light, and have other parents/schools boycotting Scratch.

dertermenter wrote:

Your second point is a good point, however I think we need to realise how rare a filterbot outage is - there is a reason the filter outage of 4th April 2022 has its own wiki page: it's such a rare occurrence (and it also shows how important the filter is - if this happened regularly, there wouldn't be articles about specific instances).
True, but I'm not referring to the filterbot outage in my second point. I mean in general: if the character limit of a Scratch comment is relatively small, and the user realistically knew what was in the copypasta, isn't it reasonable to assume that the user had already seen this swear word? And because any user that posts swear words has already seen them, it would be perfectly fine for the Scratch Team to call them out on it.

Za-Chary wrote:

I don't know why we're discussing what could happen when the filter is down when that has happened exactly 1 time in the past 16 years. To discuss a proposal for a feature in case the filter goes down seems like “doomsday speak.”
See my response to Dert's first quote in this post.

I thought of a compromise, by the way. What are your thoughts on this? If a user posts a word that is outright disallowed (like birth control things, for example), the filter will tell you what is wrong with the post. However, some topics are incredibly long (e.g. stickies), and trying to edit them after massive revisions can be extremely frustrating. So if a post is only subtly inappropriate (e.g. the cat example by @Za-Chary), the filter will tell you the sentence that contains the subtly inappropriate material.

Thoughts?
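For concreteness, here is a minimal sketch of what that two-tier response could look like. The word lists, the function name, and the split into tiers are all hypothetical - the real filter's blacklist and behavior are not public:

```python
import re

# Hypothetical word lists for illustration only -- not Scratch's real blacklist.
OUTRIGHT_DISALLOWED = {"badword"}   # placeholder term
SUBTLY_INAPPROPRIATE = {"cat"}      # per the earlier "cat" example

def filter_feedback(post):
    """Sketch of the proposed two-tier response: name the exact phrase
    for outright-disallowed terms, but only point to the containing
    sentence for subtler matches."""
    # Naive sentence split on ., !, ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", post)
    for sentence in sentences:
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        hit = words & OUTRIGHT_DISALLOWED
        if hit:
            return "Disallowed phrase: " + ", ".join(sorted(hit))
        if words & SUBTLY_INAPPROPRIATE:
            return f'This sentence may be inappropriate: "{sentence}"'
    return "OK"

print(filter_feedback("I like my cat. It is fluffy."))
# prints: This sentence may be inappropriate: "I like my cat."
```

As Za-Chary notes below, the hard part is not this reporting logic but deciding, for every blacklist entry, which tier it belongs to.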
doggy_boi1 doggy_boi1 loading

Sliverus wrote:

I thought of a compromise, by the way. What are your thoughts on this? If a user posts a word that is outright disallowed (like birth control things, for example), the filter will tell you what is wrong with the post. However, some topics are incredibly long (e.g. stickies), and trying to edit them after massive revisions can be extremely frustrating. So if a post is only subtly inappropriate (e.g. the cat example by @Za-Chary), the filter will tell you the sentence that contains the subtly inappropriate material.

Thoughts?
I might've missed something, but why would you tell them which sentence it's in when at that point you might as well just tell the user what word it is? It'd be a bigger hassle to find the word in a sentence. (don't even get me started on people who don't use punctuation)
Sliverus Sliverus loading

doggy_boi1 wrote:

I might've missed something but why would you tell them which sentence it's in when at that point you might as well just tell the user what word it is?? It'd be a bigger hassle to find the word in a sentence.
It doesn't say what the bad word you said was, but it can help you find what to remove in order to be able to make the revision to your topic. Finding the word itself should usually be fine.

doggy_boi1 wrote:

(don't even get me started on people who don't use punctuation)
Why would someone not use punctuation in a very long post like a sticky? The only way I can imagine someone wouldn't use punctuation in a very long post like that is if they're spamming, and at that point it's obvious what's triggering the filter.
Za-Chary Za-Chary loading

Sliverus wrote:

However, even the possibility of someone on Scratch being exposed to swear words should never exist. One user being exposed to inappropriate content could be enough to get parents complaining, create an article painting the Scratch Team in a negative light, and have other parents/schools boycotting Scratch.
In that case, it sounds like you don't support your own suggestion. If the filter tells you what you said wrong, there is a possibility that it would expose someone to a swear word.

Sliverus wrote:

I thought of a compromise, by the way. What are your thoughts on this? If a user posts a word that is outright disallowed (like birth control things, for example), the filter will tell you what is wrong with the post. However, some topics are incredibly long (e.g. stickies), and trying to edit them after massive revisions can be extremely frustrating. So if a post is only subtly inappropriate (e.g. the cat example by @Za-Chary), the filter will tell you the sentence that contains the subtly inappropriate material.
This assumes that it is easy to have the filter determine whether an offense is “outright disallowed” or “subtly inappropriate” — I don't think that would be easy, especially given the amount of false positives that exist. It also assumes that the Scratch Team can go back into the filter blacklist and determine specifically which words and phrases count as “outright disallowed” versus “subtly inappropriate,” which is definitely not easy.
doggy_boi1 doggy_boi1 loading

Sliverus wrote:

It doesn't say what the bad word you said was, but it can help you find what to remove in order to be able to make the revision to your topic. Finding the word itself should usually be fine.
But why would they do that when you could just tell them what the word was? Also see Za-Chary's post above.


Sliverus wrote:

doggy_boi1 wrote:

(don't even get me started on people who don't use punctuation)
Why would someone not use punctuation in a very long post like a sticky? The only way I can imagine someone wouldn't use punctuation in a very long post like that is if they're spamming, and at that point it's obvious what's triggering the filter.
Because some people just don't. I wouldn't know why; ask them.
edit: I messed up the quoting

Sliverus wrote:

I thought of a compromise, by the way. What are your thoughts on this? If a user posts a word that is outright disallowed (like birth control things, for example), the filter will tell you what is wrong with the post. However, some topics are incredibly long (e.g. stickies), and trying to edit them after massive revisions can be extremely frustrating. So if a post is only subtly inappropriate (e.g. the cat example by @Za-Chary), the filter will tell you the sentence that contains the subtly inappropriate material.
Having the filter know whether something is “subtly inappropriate” or “outright disallowed” would, as Za-Chary stated, be near-impossible with the current system. That would require the filter to have context on what topics are considered inappropriate, and to what degree.