Post-moderation is when all user-submitted content is published immediately but reviewed retrospectively by online moderators. Content can then be edited or removed if it does not conform to guidelines.
Post-moderation is a good halfway option for brands that consider their communities low-risk but still want to protect their reputation online and improve the experience for their users by checking for and removing inappropriate or offensive content.
Post-moderation can be aided by automated software, such as keyword filters, but still requires human moderators to ensure a decent level of protection and quality control.
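As a rough illustration, the Python sketch below shows how a simple keyword filter might flag already-published posts for human review. The banned-term list and function names here are hypothetical, and real moderation tools are considerably more sophisticated; the point is that flagged content stays live until a moderator acts, which is what distinguishes post-moderation from pre-moderation.

```python
import re

# Hypothetical banned-term list; a real deployment would load this from
# the brand's own guidelines and update it regularly.
BANNED_TERMS = {"spamlink", "offensiveword"}

def flag_for_review(post_text: str) -> bool:
    """Return True if the post contains a banned term and should be
    queued for a human moderator. The post remains published in the
    meantime, consistent with a post-moderation workflow."""
    words = re.findall(r"[a-z']+", post_text.lower())
    return any(word in BANNED_TERMS for word in words)

# Newly published posts are scanned after publication, not before.
published_posts = [
    "Great discussion, thanks everyone!",
    "Check out this spamlink for free stuff",
]

review_queue = [p for p in published_posts if flag_for_review(p)]
print(review_queue)  # -> ['Check out this spamlink for free stuff']
```

A filter like this only builds the queue; the decision to edit or remove a flagged post still rests with a human moderator applying the brand's guidelines.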
Negative perceptions of post-moderation revolve around the idea that online moderators delete content arbitrarily, which can create a community backlash; this often stems from the use of volunteer moderators. Professional moderation ensures content is edited or removed according to strict guidelines set by the brand owner (which community members agree to when they sign up to the site) and often protects both the user and the brand owner from unwitting mistakes such as contempt of court, copyright infringement, or defamation.
As with pre-moderation, the right technical tools for reviewing and managing submitted content are important to ensure post-moderation is scalable and doesn't become prohibitively costly.