taking moderation seriously

Excelling at moderation to ensure a safe and happy user community

As our WOLF Qanawat app has grown over the last few years, we’ve been delighted to welcome users from many Arabic-speaking countries who have discovered a creative, fun, and supportive community that offers engaging, exciting experiences and the freedom to express themselves.

The app has evolved greatly since launching in 2020, thanks in part to the valuable feedback from our loyal community members. These are the people who tell us what they love and what they’d like to see more of, and who also let us know about anything that might need our extra attention.

For WOLF, ensuring the WOLF Qanawat app is a safe and happy place to connect is paramount and absolutely fundamental to what we do as a business: we focus on supporting the app community and making sure their needs are met. We pride ourselves on having created a secure and welcoming environment, demonstrated by the 98% year-on-year retention rate of our core users and satisfaction rates from the app community and WOLF support teams of between 95% and 98%.

Our caring and dedicated team, made up of professional in-house staff and hundreds of volunteer users, looks after every app user from the moment they register. Once a new user has registered and agreed to the Terms of Service, Privacy Policy and Community Guidelines in Arabic, we maintain a harmonious community through moderation.


It’s vital that users can trust us to support them, and we see that, when problems do arise, community members are great at maintaining good behaviour in-app and letting other users know what’s expected. Positively, only a very small percentage of issues requires the involvement of the app’s professional Community Support team.

Fortunately, we have very few instances of behaviour that need intervention, but when we do, we use a number of AI and machine learning tools, alongside a dedicated 24/7 team that keeps an eye on user behaviour and ensures no unpleasant language or images are used. The moderation team responds immediately when severe issues are flagged, with resolution essentially never taking more than 24 hours.
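To give a sense of the shape of this kind of automated first pass, here is a minimal sketch. The classifier, thresholds and routing labels below are hypothetical examples for illustration only, not our actual tooling; in practice the scoring step would call trained AI/ML models rather than a word list.

```python
from dataclasses import dataclass

# Hypothetical thresholds, purely illustrative -- not the app's real configuration.
SEVERE_THRESHOLD = 0.90   # e.g. threats or hate speech: alert the 24/7 team immediately
REVIEW_THRESHOLD = 0.60   # borderline content: queue for human review

@dataclass
class Message:
    user_id: str
    text: str

def toxicity_score(text: str) -> float:
    """Stand-in for an AI/ML classifier; a real system would call a trained model."""
    flagged_terms = {"abuse", "threat"}  # placeholder word list for the sketch
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.5)

def triage(message: Message) -> str:
    """Route a message: severe issues go straight to moderators, milder ones to review."""
    score = toxicity_score(message.text)
    if score >= SEVERE_THRESHOLD:
        return "escalate_to_on_call_team"   # a human moderator responds immediately
    if score >= REVIEW_THRESHOLD:
        return "queue_for_human_review"     # handled within the 24-hour resolution target
    return "allow"
```

The point of the two-threshold design is the one made above: automation surfaces candidates quickly, but human judgement decides the outcome.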

Community and Support Staff also keep a close eye on the official in-app support channels, where users can raise any concerns or complaints. Volunteers, administrators and moderators from the community are also able to ban, remove or silence any users who don’t behave according to the Community Guidelines.

On very rare occasions, we may discover behaviour that doesn’t follow our codes of conduct, and we respond using a structured escalation path. Once flagged, a user may receive a warning, a temporary block or an immediate ban, with repeat or severe offenders permanently removed from the app. We don’t tolerate this behaviour and will always protect our users from it, so for identified offenders we use device-linked restrictions to ensure they can’t create a new account using the same mobile, tablet or laptop. We also use community-moderation tools to prevent repeat offenders from returning under a new identity.
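As a rough sketch of what an escalation path and device-linked restriction can look like in code, the snippet below maps an offender’s history onto the warning/block/ban steps described above and refuses registration from a banned device. The function names, thresholds and in-memory denylist are our own illustrative assumptions, not the app’s actual implementation.

```python
banned_devices: set[str] = set()   # device identifiers linked to permanently removed accounts

def enforcement_action(prior_offences: int, severe: bool) -> str:
    """Map a flagged user's history onto the escalation path: warning, block or ban."""
    if severe or prior_offences >= 2:
        return "permanent_ban"      # repeat or severe offenders are removed from the app
    if prior_offences == 1:
        return "temporary_block"
    return "warning"                # first, minor offence

def apply_permanent_ban(device_id: str) -> None:
    """Record the offender's device so the same hardware can't open a new account."""
    banned_devices.add(device_id)

def can_register(device_id: str) -> bool:
    """Device-linked restriction check at sign-up."""
    return device_id not in banned_devices
```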


For any affected users, we prioritise care, with our Support Team reaching out, providing reassurance that action has been taken and ensuring they know how to use the blocking and reporting tools. Our users know they can always get in touch with the team for any other issues; making them feel safe is extremely important.

We also recognise that, for an app with so many users, negative experiences come at a reputational and financial cost to the business, especially in tightly knit online communities, where they lead to churn and decreased trust. So at WOLF we do everything we can to ensure a safe, secure and happy environment, where respect is nurtured and expected and there is a culture of kindness.

What we have learned is that a safe community is an ongoing commitment, and that by giving users the tools to self-moderate, we build trust and a sense of ownership of the community among them. Using AI to support our moderation can flag many situations, but human judgement is irreplaceable for fairness.

That said, we see AI playing an even bigger role in the future, especially predictive moderation that catches toxic behaviour patterns before they escalate. However, as AI expands, we must be aware of and mitigate its misuse, such as deepfakes or synthetic harmful content.

Moving forward, we know that growth makes harmful behaviour more difficult to detect without strong predictive tools, so we are likely to see the expansion of user-led moderation, where community leaders are empowered with more tools to support safety in real time. As online communities evolve, user support is crucially important, because retention depends on users feeling safe, having a sense of belonging to the community and ultimately finding somewhere they can relax and enjoy themselves. At WOLF we will always strive to make this possible.
