Under the law, content moderation is an essential obligation that online platforms must fulfil to create suitable online environments for their users. By the law, we mean national or European laws that require the removal of content by online platforms, such as EU Regulation 2021/784, which addresses the dissemination of terrorist content online. Content moderation required by these national or European laws, summarised here as ‘the law’, differs from moderation that is not directly required by law but is instead conducted voluntarily by the platforms. New regulatory requirements add a further layer of complexity to the legal grounds for content moderation and are relevant to platforms’ daily decisions. These decisions are grounded either in different sources of law, such as international or national provisions, or in contractual terms, such as a platform's Terms of Service and Community Standards. However, it remains unclear how to empirically measure these essential aspects of content moderation. We therefore ask the following research question: How do online platforms interpret the law when they moderate online content?