Although people have become increasingly reliant on social media for information, misinformation, especially medical misinformation, runs rampant. Companies like Facebook and Twitter have been given broad discretion in how they moderate their platforms, but the results are less than desirable. Under their current structures, engagement-driven content takes priority over factual accuracy, fueling a pandemic of misinformation. Despite this influence, social media platforms face no penalties for the harm users suffer. This is because Section 230 of the Communications Decency Act (CDA) shields interactive computer services, including social media platforms, from liability for misinformation created by users. The COVID-19 pandemic highlighted these issues: false information on social media undermined coordination between government and citizens and increased personal health risks. To protect public health, Section 230 should be amended so that its liability shield no longer covers user-generated health misinformation hosted without precautions or user consent. Compliance with the amended law would require interactive computer services to label health-related information for users and to review information that directly contradicts the scientific consensus compiled by government agencies. By removing liability protection for medical misinformation, social media platforms and other interactive computer services would be given an incentive to limit the impact of cognitive biases and the spread of harmful misinformation.