A lot of people understand how vital content moderation is. Unfortunately, blunders like the GIPHY content moderation slip-up still happen.
In March 2018, GIPHY, a popular online platform for sharing GIFs, faced a controversy related to a content moderation slip-up.
The incident involved the discovery of racist and offensive GIFs within GIPHY’s library, which were available for users to search and share. These offensive GIFs came to light when a user discovered a racial slur in the search results and raised concerns about the inappropriate content.
What GIPHY Did Next
The discovery prompted a swift response from GIPHY, which immediately removed the offensive content and investigated how it had made its way into their library. GIPHY acknowledged the seriousness of the issue, issued an apology, and expressed their commitment to ensuring a safe and inclusive platform for users.
Following the incident, GIPHY implemented stricter content moderation measures and reviewed their content approval processes.
They also introduced new policies and guidelines for content providers and users to maintain a higher level of quality and appropriateness in their GIF library. GIPHY emphasized the importance of community feedback and encouraged users to report any offensive content they come across.
Lessons from the GIPHY Content Moderation Slip-Up
The controversy surrounding the GIPHY content moderation slip-up served as a wake-up call for the company and the broader industry.
Effective and Real-time Moderation
It highlighted the challenges of effectively moderating user-generated content in real time and the importance of having robust systems and processes in place to prevent offensive or inappropriate material from circulating on the platform.
Continuous Monitoring and Improvement
The incident also raised awareness about the need for continuous monitoring and improvement of content moderation practices to ensure that platforms like GIPHY uphold their responsibility to provide a safe and inclusive environment for users.
It prompted discussions around the impact of offensive content on users and the role of technology companies in maintaining a high standard of content quality and appropriateness.
- Image referencing via machine learning helps detect whether a GIF contains known prohibited content.
- Human moderators and users are also involved in the GIPHY content moderation process; the moderators sometimes make assessments without the help of machine learning.
- The GIPHY content moderation approach also includes community guidelines that clearly list the categories of content not allowed on the platform, including violence, hate, and self-harm.
- GIPHY also keeps a list of keywords that will not yield any results in its search bar.
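To make the automated layers in the bullets above concrete, here is a minimal, hypothetical sketch of how a platform might combine a hash-based image blocklist with a search keyword blocklist. It is not GIPHY's actual implementation; the function names, data structures, and sample entries are illustrative assumptions, and production systems typically rely on perceptual hashing so that re-encoded copies of a known-bad image still match.

```python
import hashlib

# Hypothetical blocklists; in production these would live in a datastore
# maintained by a trust and safety team.
BLOCKED_IMAGE_HASHES = {"<digest of a known prohibited GIF>"}
BLOCKED_SEARCH_TERMS = {"example-banned-term"}


def gif_fingerprint(gif_bytes: bytes) -> str:
    """Exact-match fingerprint of a GIF. Real systems favor perceptual hashes
    so that re-encoded or lightly edited copies of known-bad content still match."""
    return hashlib.sha256(gif_bytes).hexdigest()


def is_known_prohibited(gif_bytes: bytes) -> bool:
    """True if the GIF matches an entry on the image blocklist."""
    return gif_fingerprint(gif_bytes) in BLOCKED_IMAGE_HASHES


def should_suppress_search(query: str) -> bool:
    """True if the query contains a blocklisted keyword and should return no results."""
    return any(token in BLOCKED_SEARCH_TERMS for token in query.lower().split())
```

A real deployment would also log every match and route edge cases to human reviewers, which is where the moderator-focused lessons below come in.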
- Vigilance in Content Moderation: Content moderators must stay continuously watchful, actively reviewing and monitoring user-generated content to identify and remove any offensive or inappropriate material promptly.
- Regular Monitoring: Regular and thorough content checks are necessary to prevent slip-ups that can harm users and tarnish a platform’s reputation.
- Stronger Moderation Processes: Strengthening moderation processes is another crucial lesson learned. Content approval mechanisms should be robust, efficient, and supported by clear guidelines and policies.
- Audits and Reviews: Regular audits and reviews of moderation practices can help identify potential weaknesses and areas for improvement. Through effective processes, platforms can ensure that content moderators have the necessary tools and support to carry out their responsibilities effectively.
- User Feedback and Reporting: User feedback and reporting play a vital role in content moderation. Platforms should encourage users to actively report any inappropriate or offensive content they encounter.
- Moderator Training: Regular training and education for content moderators are essential to keep them updated on emerging trends and challenges. Training programs should cover topics such as identifying misinformation, understanding cultural sensitivities, and staying informed about evolving content moderation practices.
- Shared Responsibility: Platforms must collaborate closely with content moderators, providing them with the necessary resources, support, and feedback channels to effectively carry out their duties.
Facebook’s Content Moderation Challenges
Facebook, as one of the largest social media platforms, has faced numerous content moderation controversies over the years. These include issues related to hate speech, misinformation, and the spread of harmful content.
In several instances, offensive or inappropriate content managed to evade Facebook’s content moderation systems, leading to public backlash and calls for improved moderation practices.
YouTube’s Advertiser Boycott
In 2017, major advertisers boycotted YouTube due to ads being displayed alongside extremist and offensive content. The incident highlighted the challenges of content moderation on a platform with billions of user-uploaded videos.
YouTube responded by strengthening its content policies, improving its algorithms, and increasing human review to prevent ads from appearing alongside inappropriate content.
Reddit’s Missteps with Moderation
Reddit, a popular online community platform, has faced criticism for its handling of content moderation in various subreddits.
Some subreddits were found to host explicit or offensive content that violated the platform’s guidelines. These incidents sparked discussions around the challenges of maintaining consistent and effective content moderation across diverse communities on the platform.
Twitter’s Struggle with Harassment
Twitter has grappled with issues related to harassment and abuse on its platform. Users have raised concerns about the platform’s ability to effectively moderate and address instances of targeted harassment, hate speech, and threats.
Twitter has made efforts to improve its content moderation practices, introducing new policies and enhancing reporting mechanisms to combat abusive behavior.
Establish Clear Content Moderation Guidelines
Brands should develop comprehensive content moderation guidelines that clearly outline what is acceptable and unacceptable content on their platforms.
These guidelines should cover various categories, such as hate speech, offensive imagery, discrimination, harassment, and explicit content. It is crucial to provide specific examples and explanations to ensure clarity and consistency in moderation decisions.
Regularly Review and Update Guidelines
Content moderation guidelines should be periodically reviewed and updated to adapt to evolving societal norms, legal requirements, and emerging content trends. Brands should stay informed about new forms of offensive or harmful content and update their guidelines accordingly to address potential loopholes.
Implement Stringent Approval Processes
Brands should establish robust approval processes for user-generated content before it is published or shared on their platforms. This may involve manual review by content moderators or the use of automated tools to scan and filter content for potential violations. Stricter approval processes can help prevent inappropriate or offensive content from entering the public domain.
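As a rough illustration of such an approval process, the sketch below runs every submission through an automated scan before publication and holds anything the scan is unsure about for a human moderator. The risk scorer, thresholds, and review queue are assumed placeholders rather than any specific platform's tooling.

```python
from dataclasses import dataclass
from queue import Queue


@dataclass
class Submission:
    content_id: str
    caption: str


# Assumed thresholds; each platform would tune these to its own risk appetite.
REJECT_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.4

human_review_queue: "Queue[Submission]" = Queue()


def automated_risk_score(sub: Submission) -> float:
    """Toy stand-in for an ML classifier or rules engine returning a score in [0, 1]."""
    blocked_terms = {"example-banned-term"}  # illustrative only
    return 1.0 if any(t in blocked_terms for t in sub.caption.lower().split()) else 0.0


def process_submission(sub: Submission) -> str:
    """Publish, reject, or hold a submission for manual review before it goes live."""
    score = automated_risk_score(sub)
    if score >= REJECT_THRESHOLD:
        return "rejected"            # clear violation: never published
    if score >= REVIEW_THRESHOLD:
        human_review_queue.put(sub)  # uncertain: a moderator decides
        return "pending_review"
    return "published"               # low risk: goes live immediately
```

The key design choice is that nothing goes live until it has cleared the scan; stricter platforms can lower the review threshold at the cost of a larger manual queue.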
Provide Clear Reporting Mechanisms
Brands should encourage users to report any content that violates the moderation guidelines. Implementing user-friendly reporting mechanisms, such as reporting buttons or forms, enables users to easily flag problematic content. Promptly address and investigate user reports to ensure swift action against violations.
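A reporting mechanism can be as simple as an endpoint that records who flagged what and temporarily hides content once enough independent users report it. The sketch below is a hypothetical in-memory version; the threshold, storage, and helper functions are assumptions for illustration.

```python
from collections import defaultdict

# content_id -> set of user IDs who reported it (in-memory stand-in for a database)
reports: defaultdict[str, set[str]] = defaultdict(set)

AUTO_HIDE_THRESHOLD = 3  # assumed: hide after three independent reports, pending review


def enqueue_for_review(content_id: str, reason: str) -> None:
    print(f"queued {content_id} for moderator review ({reason})")


def hide_pending_review(content_id: str) -> None:
    print(f"temporarily hid {content_id} until a moderator decides")


def report_content(content_id: str, reporter_id: str, reason: str) -> None:
    """Record a user report; heavily reported content is hidden while it waits."""
    reports[content_id].add(reporter_id)  # a set de-duplicates repeat reports per user
    enqueue_for_review(content_id, reason)
    if len(reports[content_id]) >= AUTO_HIDE_THRESHOLD:
        hide_pending_review(content_id)
```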
Invest in Moderation Resources
Adequate resources, including trained content moderation teams and appropriate technological tools, are essential for effective moderation.
Just like the GIPHY content moderation approach, brands should invest in building or outsourcing moderation capabilities to ensure that content is reviewed thoroughly and consistently. This includes providing ongoing training and support to content moderators to enhance their understanding of guidelines, context, and emerging trends.
Foster Transparency and Communication
Brands should maintain open and transparent communication with their user community regarding content moderation practices.
Clearly communicate the platform’s moderation policies, how user reports are handled, and the actions taken against content violations. Engage with users through public statements, FAQs, and regular updates to address concerns and build trust.
Proactively Monitor and Audit
Brands should proactively monitor user-generated content for potential violations and conduct regular audits of their moderation processes. This helps identify any weaknesses, gaps, or patterns in content moderation and allows for continuous improvement.
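Proactive monitoring can include routinely re-reviewing a random sample of content the automated filters already approved, which is one way to surface misses before users do. The sample rate and helper below are assumptions, not a prescribed standard.

```python
import random

AUDIT_SAMPLE_RATE = 0.02  # assumed: re-review roughly 2% of auto-approved items


def select_audit_sample(approved_ids: list[str],
                        rate: float = AUDIT_SAMPLE_RATE) -> list[str]:
    """Randomly pick already-approved content for a human quality check."""
    if not approved_ids:
        return []
    sample_size = max(1, int(len(approved_ids) * rate))
    return random.sample(approved_ids, sample_size)
```

Whatever the audit turns up, including what slipped through and why, can then feed back into guideline updates and moderator training.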
Respond Swiftly to Issues
In the event of a content moderation controversy, brands should respond swiftly, transparently, and empathetically.
Acknowledge the issue, apologize if necessary, and communicate the steps being taken to rectify the situation and prevent similar incidents in the future. Engage with the user community, address concerns, and demonstrate a commitment to maintaining a safe and inclusive environment.
Learn from Mistakes
Brands should view any content moderation missteps as opportunities for learning and improvement.
Conduct thorough post-incident analyses to understand the root causes, identify areas for enhancement, and implement corrective measures. This includes revisiting moderation processes, guidelines, and training programs to prevent similar controversies in the future.
Expertise and Specialization
Outsourcing content moderation allows platforms to tap into the expertise and specialization of professional moderation teams. Dedicated outsourcing partners have extensive experience in content moderation across various platforms and industries.
They possess in-depth knowledge of industry best practices, emerging trends, and the latest technologies. Leveraging their expertise enables platforms to ensure more accurate and efficient moderation, leading to better overall user experiences and a stronger reputation.
Scalability and Flexibility
Outsourcing content moderation provides platforms with the scalability and flexibility needed to handle fluctuating content volumes. As user-generated content continues to grow rapidly, outsourcing partners can quickly scale their resources to meet increasing moderation demands.
This scalability ensures that platforms can maintain swift response times, handle peak periods effectively, and adapt to evolving content trends without compromising the quality of moderation.
24/7 Coverage
Online platforms operate globally, and user activity occurs around the clock. Outsourcing content moderation allows platforms to benefit from 24/7 coverage, ensuring that content is reviewed and addressed in a timely manner, regardless of time zones or geographic locations.
This round-the-clock moderation helps prevent the spread of harmful or inappropriate content, enhances user safety, and builds trust among the platform’s diverse user base.
Cost-effectiveness
Outsourcing content moderation can be a cost-effective solution for online platforms. Establishing an in-house moderation team requires significant investments in recruitment, training, infrastructure, and ongoing management.
Outsourcing allows platforms to avoid these upfront costs and ongoing operational expenses. Instead, they can leverage the outsourcing partner’s existing infrastructure, resources, and expertise, reducing the overall financial burden.
Access to Advanced Technology
Professional content moderation providers often have access to advanced technologies and tools specifically designed for content moderation tasks. These technologies, such as AI-powered content analysis and automated filtering systems, can enhance the efficiency and accuracy of moderation processes.
Through outsourcing, platforms can leverage these cutting-edge technologies without the need for extensive in-house development or investment.
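One way platforms integrate such external technology is behind a thin interface, so the moderation provider (or its underlying models) can be swapped without touching the rest of the codebase. The interface and scores below are hypothetical, not any vendor's real API.

```python
from typing import Protocol


class ModerationProvider(Protocol):
    """Hypothetical minimal contract for an external moderation service."""

    def score_image(self, image_bytes: bytes) -> dict[str, float]:
        """Return category -> confidence, e.g. {"hate": 0.97, "nudity": 0.01}."""
        ...


def should_block(provider: ModerationProvider, image_bytes: bytes,
                 block_threshold: float = 0.8) -> bool:
    """Block the content if any category score from the provider is high enough.

    Switching vendors only means supplying a different object that satisfies
    the same interface."""
    scores = provider.score_image(image_bytes)
    return any(score >= block_threshold for score in scores.values())
```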
Language and Cultural Expertise
Online platforms with a global user base face the challenge of moderating content in multiple languages and cultural contexts.
Outsourcing content moderation to providers with multilingual capabilities ensures that platforms can effectively moderate content across different languages and cultures. Language expertise helps in accurately interpreting and understanding the context of user-generated content, reducing the chances of misinterpretation or bias.
Risk Mitigation
Content moderation involves potential legal risks and regulatory compliance challenges. Platforms can mitigate these risks by partnering with outsourcing providers who are well-versed in local laws and regulations.
They can ensure that content moderation practices align with legal requirements, preventing potential lawsuits, penalties, or reputational damage associated with non-compliance.
Focus on Core Business Activities
Outsourcing content moderation allows online platforms to focus on their core business activities, such as platform development, user experience enhancements, and strategic growth initiatives.
Content moderation is a specialized function that requires dedicated resources and attention. Outsourcing allows platforms to delegate this task to experts, freeing up internal resources to concentrate on other critical aspects of the platform’s operations and growth.
- Automation isn’t foolproof
It’s not every day that a game-changing bug like this hits a website. However, if you’re hosting user-generated content (images, videos, audio, or what have you), it’s important to note that artificial intelligence won’t catch objectionable content 100% of the time. GIPHY has a gigantic library of GIFs, and an offensive one slipped past their radar. Content moderation still needs a human moderator’s subjective eye to discern what is and what isn’t acceptable (see the sketch after this list).
- Implement a foolproof content moderation strategy
Sometimes, it takes a reality check like this one to realize that something’s wrong with your moderation policies. Your content moderation strategy must be stringent enough to prevent objectionable content from spilling onto anyone’s dashboard, yet accommodating enough to encourage the creation of user-generated content (UGC). Whether you use AI, human moderators, or both, your content moderation strategy must be reliable.
- Be thorough and proactive
Your content won’t clean itself. You must take a thorough and proactive approach to preventing objectionable content from proliferating on your website. Take, for instance, the GIF database alternative Gfycat. To combat “deepfakes” (AI-assisted face-swap pornography), it uses two kinds of AI that identify fakery when they see it. Since AI isn’t foolproof, Gfycat also uses human moderators and reviews a GIF’s metadata, all just to combat deepfakes. This preemptive strike on such malicious content allows Gfycat to keep its database safe.
- Consider outsourcing your content moderation services
If you feel that your company’s lacking in its reviewing efforts, maybe it’s time to consider moving your content moderation services to a provider. There’s nothing wrong with an in-house team of reviewers, but outsourcing offers economic and scaling advantages without a drop in quality, letting you maintain a pristine website at a fraction of the cost.
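Inspired by the Gfycat description above, one pattern that keeps a human in the loop is to run several independent automated checks and escalate whenever they disagree. The detector interface and labels below are illustrative assumptions, not Gfycat’s or GIPHY’s actual pipeline.

```python
from typing import Callable

# Each detector is an independent check (for example a visual classifier or a
# metadata review) that returns either "clean" or "violation" for a given GIF.
Detector = Callable[[bytes, dict], str]


def classify_with_ensemble(gif_bytes: bytes, metadata: dict,
                           detectors: dict[str, Detector]) -> str:
    """Publish or reject only when all detectors agree; otherwise escalate."""
    if not detectors:
        return "human_review"  # no automated signal at all: a person decides
    verdicts = {name: check(gif_bytes, metadata) for name, check in detectors.items()}
    if all(v == "clean" for v in verdicts.values()):
        return "publish"
    if all(v == "violation" for v in verdicts.values()):
        return "reject"
    return "human_review"  # mixed signals: automation alone is not trusted
```

The deliberate bias here is toward sending borderline cases to a person, trading moderator time for fewer slip-ups like the one that cost GIPHY its Instagram and Snapchat integrations.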
GIPHY’s Commitment
In response to the controversy, GIPHY took steps to rectify the situation, improve their content moderation processes, and strengthen their commitment to fostering a positive user experience. As a result, Instagram and Snapchat allowed stickers from GIPHY to be posted on their platforms once again.
While the GIPHY content moderation slip-up was a misstep, it also served as an opportunity for growth and learning, highlighting the importance of proactive content moderation practices in the online ecosystem.
The GIPHY Content Moderation Approach Post-2018
Since the GIPHY content moderation blunder, the platform has adopted several improvements, from machine learning detection and human review to clear community guidelines and a search keyword blocklist.
Valuable Insights
The GIPHY content moderation incident offers valuable lessons for content moderators in several areas, from vigilance and regular monitoring to stronger processes, user feedback, moderator training, and shared responsibility.
Other Instances of Content Moderation Blunders
While the GIPHY content moderation controversy of 2018 was unique to that platform, there have been other instances that highlight the importance of robust content moderation practices and the potential consequences of content moderation slip-ups. Facebook, YouTube, Reddit, and Twitter have all had notable stumbles of their own.
Prevention Measures
To avoid controversies similar to the GIPHY content moderation incident, brands should prioritize the implementation of robust content moderation practices and guidelines. Key steps include establishing clear guidelines, reviewing them regularly, enforcing stringent approval processes, providing reporting mechanisms, investing in moderation resources, and responding swiftly when issues arise.
Following these steps and continuously evolving their content moderation practices allows brands to minimize the risk of controversies such as the GIPHY content moderation slip-up and foster a safe and positive online environment for their users.
Outsourcing to Avoid Moderation Blunders
Outsourcing content moderation jobs can bring several benefits to online platforms like GIPHY in maintaining their good reputation, from specialized expertise, scalability, and 24/7 coverage to cost-effectiveness, advanced technology, language expertise, and risk mitigation.
Key Takeaways
The GIPHY content moderation slip-up of 2018 serves as a powerful reminder of the critical role content moderators play in maintaining a safe and inclusive online environment. Reflecting on this incident and the lessons learned helps content moderators gain valuable insights into the challenges and best practices of content moderation.
Vigilance, strong moderation processes, user feedback, regular training, and collaboration between platforms and content moderators are essential for effective content moderation. Looking at the improvements to GIPHY’s content moderation can also help guide moderators and brands in keeping their platforms clean.
If you are concerned about maintaining a good brand image online (as you should be), don’t hesitate to partner with Open Access BPO.
One of the packages we provide as an outsourcing firm is content moderation, and we have proven to be experts at it due to our multilingual capabilities. We at Open Access BPO take pride in our diverse team of well-trained agents who are always ready to manage content that will keep your brand reputation spotless.
Contact us today to begin your partnership with Open Access BPO.
When it comes to content moderation, consider the butterfly effect. In a sense, one mistake can make a ripple and affect everything. And sometimes, one mistake can cost a business relationship.
In early 2018, image-sharing social media apps Instagram and Snapchat integrated the animated GIF aggregator GIPHY into their apps as stickers for snaps and stories. Just months later, both cut ties with the GIF database after a content moderation mishap: one racist GIF wasn’t scrubbed from GIPHY’s library and was therefore available for use in the apps. The GIF in question was a “death counter” of sorts for black people.
While the GIF aggregator blamed a glitch in their code that allowed the tasteless moving image to surface as a sticker, that doesn’t change the fact that the incident strained their online reputation. Such things happen when content moderation goes awry. In retrospect, this mishap offers many lessons, from the limits of automation to the value of a thorough, proactive review strategy.
Sometimes, with content moderation, a single mistake is all it takes, and the fix comes too late. That is what happened to GIPHY’s integration with Instagram and Snapchat. Prevent such mistakes by taking a holistic and proactive approach to reviewing content.