This panel discussion will explore why multilingual safety is essential for inclusive AI governance, addressing emerging risks such as jailbreaking in low-resource languages and the current lack of culturally grounded benchmarks and evaluation frameworks. Panelists will examine the challenges of building culturally sensitive safety datasets, discuss how AI models can be developed, evaluated, and deployed safely across diverse linguistic and socio-cultural contexts, and highlight opportunities for collaborative research and open evaluation infrastructure.