This project was inspired by the recent "locker room" tragedy in India. Word spread about a boys-only group whose participants were body-shaming and planning heinous crimes such as rape. There were two separate cases, one on Instagram and one on Snapchat. The police later proved that the Snapchat case was fake, but before anything had been proven, people began spreading hate speech against the parties involved. This eventually led one of those involved to die by suicide.
My solution aims to warn people when a text or message they are about to post to their stories or send to someone might harm that person's mental health. It assists people who cannot judge for themselves whether a message will affect someone mentally, so that they do not unknowingly push someone toward depression, anxiety, or similar states. The bot awards an NFT token to anyone who brings it a potentially hateful text, and the texts are recorded on the blockchain. The reward is claimed whenever someone submits a new, potentially dangerous hateful text to the bot. This also draws people to the chatbot, since they are rewarded for using it.
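The flow described above (check a text for potential harm, record it, and mint a reward only for new submissions) could be sketched roughly as follows. This is only an illustrative sketch: the keyword check stands in for whatever real toxicity classifier the bot uses, and the in-memory ledger stands in for the actual blockchain and NFT contract; all names here (`HARMFUL_TERMS`, `RewardLedger`, `submit`) are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical stand-in for a real toxicity/hate-speech classifier.
HARMFUL_TERMS = {"hate", "kill", "ugly", "worthless"}

def is_potentially_harmful(text: str) -> bool:
    """Rough heuristic: flag text containing any harmful term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & HARMFUL_TERMS)

@dataclass
class RewardLedger:
    """Simulates the on-chain record of texts and minted reward tokens."""
    records: list = field(default_factory=list)
    tokens_minted: int = 0

    def submit(self, text: str) -> Optional[int]:
        """Record a harmful text and return a new token id, or None."""
        if not is_potentially_harmful(text):
            return None
        # Store only a hash of the text, as one might on-chain.
        digest = hashlib.sha256(text.encode()).hexdigest()
        if any(r["hash"] == digest for r in self.records):
            return None  # duplicates earn no reward
        self.tokens_minted += 1
        token_id = self.tokens_minted
        self.records.append({"hash": digest, "token_id": token_id})
        return token_id
```

The duplicate check matters for the incentive design: rewarding only *new* texts encourages people to bring fresh examples to the bot rather than resubmitting known ones.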