Generative Artificial Intelligence (GAI) has rapidly emerged as a transformative technology capable of autonomously creating human-like content across domains such as text, images, code, and media. While GAI offers significant benefits in fields like education, healthcare, and creative industries, it also introduces complex ethical challenges. This study aims to systematically review and synthesize the ethical landscape of GAI by analyzing 112 peer-reviewed journal articles published between 2021 and 2025. Using a Systematic Literature Review (SLR) methodology, the study identifies five primary ethical challenges: bias and discrimination, misinformation and deepfakes, data privacy violations, intellectual property issues, and accountability and explainability. In addition, it highlights emerging opportunities for ethical innovation, such as responsible design, inclusive governance, and interdisciplinary collaboration. The findings reveal a fragmented research landscape with limited empirical validation and inconsistent ethical frameworks. This review contributes to the field by mapping cross-sectoral patterns, identifying critical research gaps, and offering practical directions for researchers, developers, and policymakers to promote the responsible development of generative AI.