Frequently Asked Questions

Answers to the most frequently asked questions about the initiative and how it works

What is the Baligh Initiative?

The Baligh Initiative is a civic-tech project aimed at monitoring and combating violence and hate speech in the Syrian digital space. It combines automated text analysis with human review, drawing on research and legal expertise to provide accurate content classification and reliable reporting services.

How is the text analyzed?

Analysis begins with an AI model trained on Syrian dialects and context to determine the severity of the speech. In complex or sensitive cases, a specialized team reviews the results to ensure accuracy and minimize bias. This dual approach combines the speed of technology with the expertise of specialists.
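The following is a minimal sketch of how such a dual pipeline could work, written in Python. The function names, labels, and confidence threshold are illustrative assumptions rather than the initiative's actual implementation: an automated classifier produces an initial label, and low-confidence or legally sensitive results are flagged for human review.

```python
from dataclasses import dataclass

@dataclass
class AnalysisResult:
    label: str          # e.g. "neutral", "offensive", "hate_speech", "incitement"
    confidence: float   # model confidence between 0.0 and 1.0
    needs_review: bool  # True when a human reviewer should confirm the result

# Illustrative values: labels treated as sensitive, and the escalation threshold.
SENSITIVE_LABELS = {"hate_speech", "incitement"}
CONFIDENCE_THRESHOLD = 0.85

def classify_text(text: str) -> tuple[str, float]:
    """Placeholder for the dialect-aware model; returns (label, confidence)."""
    raise NotImplementedError("Replace with a real model call.")

def analyze(text: str) -> AnalysisResult:
    label, confidence = classify_text(text)
    # Escalate to human review when the model is unsure or the label carries legal weight.
    needs_review = confidence < CONFIDENCE_THRESHOLD or label in SENSITIVE_LABELS
    return AnalysisResult(label=label, confidence=confidence, needs_review=needs_review)
```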

Is my data safe when using the tool?

Yes. The initiative stores input texts to improve monitoring and analysis accuracy, without linking them to users or their identities. Report-related data is stored under strict privacy safeguards and used exclusively for legal and research purposes related to combating hate speech.

How can I report offensive content?

After analyzing the text, the classification result is shown to the user. If the content contains hate speech or dangerous incitement, a legal report can be generated according to the laws of the chosen country, whether inside or outside Syria. Reports are sent to competent authorities or can be downloaded and used directly in the reporting process.

What is hate speech? And how do we distinguish it from freedom of expression?

Hate speech is expression that targets an individual or group on the basis of their religious, sectarian, gender, or regional identity and relies on incitement, stigmatization, or dehumanization. Freedom of expression remains protected unless it turns into incitement or collective abuse. This distinction helps protect public debate without normalizing verbal violence.

Does the analysis rely entirely on AI?

No. AI is an assisting tool, not the final arbiter. The model performs the initial classification, while sensitive cases are reviewed by humans to ensure higher accuracy in understanding the context, language, and narratives prevalent in Syrian discourse.

What should I do if I think the system made a mistake?

You can submit a request for manual review. The text and its context are re-read, and the classification is corrected if an error is confirmed. This process aims to build user trust and to improve the system continuously.

What happens after submitting a formal report?

A legal report is generated containing the violating text, risk level analysis, and the legal basis related to the chosen country. It is then forwarded to the initiative's partners or downloaded directly by the user for use in reporting to competent authorities.
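As an illustration only, a report of this kind could be represented by a structure like the one below; the field names and values are assumptions chosen to mirror the description above, not the initiative's actual report format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LegalReport:
    violating_text: str      # the analyzed content, quoted verbatim
    risk_level: str          # e.g. "low", "medium", "high"
    analysis_summary: str    # why the content received its classification
    country_code: str        # jurisdiction chosen by the user, e.g. "SY" or "DE"
    legal_basis: list[str]   # relevant articles or statutes in that jurisdiction
    generated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```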

Do I need to create an account?

No. Anyone can use the analysis tool without registration. An account is only useful if you wish to keep a record of your reports or track their status.

Can the tool be used for research or educational purposes?

Yes. The tool is used in training programs, research on digital media and civil peace, and in building anti-hate speech curricula. The initiative can provide technical or methodological support to interested parties.

Is the tool only for Syrians?

Although the tool is specifically developed for Syrian dialects and narratives, its use is available to anyone who wishes to understand hate speech in Arabic or monitor incitement patterns in similar contexts.

What type of content cannot be analyzed currently?

The system currently analyzes text only; it cannot process images, long videos, or audio recordings. Work is underway to add OCR and audio processing tools to expand the scope of analysis.
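As a sketch of what the planned expansion might look like, an OCR step could extract text from an image and pass it to the same text analyzer. The example below uses the open-source pytesseract wrapper with the Arabic language pack as one possible option; this is an assumption, not the initiative's confirmed technical roadmap.

```python
from PIL import Image
import pytesseract

def extract_text_from_image(path: str) -> str:
    """Return any Arabic text found in the image, ready for classification."""
    image = Image.open(path)
    return pytesseract.image_to_string(image, lang="ara")

# The extracted text could then be fed to the existing text analysis step, e.g.:
# result = analyze(extract_text_from_image("screenshot.png"))
```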

Why should hate speech be reported?

Because hate speech is not just words; it is a precursor to violence and division. Experience in numerous conflict countries has shown that ignoring this speech leads to mass crimes and threatens civil peace. Reporting is a preventive practice aimed at protecting society and promoting accountability.

What should I do if I receive a direct threat?

Save all the evidence (text, link, date), analyze the content via the tool, generate a legal report, and then notify the competent authorities in your country of residence. You can also contact the initiative for technical or advisory support.

Who are the initiative's partners?

The initiative operates within a broad cooperation network that includes lawyers inside and outside Syria, civil society organizations, media institutions, and researchers in the field of digital rights. This allows it to combine legal expertise, technical capability, and research knowledge in a comprehensive approach to hate speech.

How can I join or volunteer?

You can contact us via the 'Join Us' page. The initiative welcomes volunteers in the fields of monitoring, research, design, translation, and legal analysis.

Is the initiative a political entity?

No. The initiative is completely independent and does not belong to any political or partisan entity. Its goal is to curb hate speech, promote citizenship values, and build a safe and responsible digital space.