Your Future with AI: Why Young Generations’ Voice Matters in Shaping AI into a Beneficial Companion

Assistant Prof. Başak Ozan Özparlak from the Faculty of Law asserts that artificial intelligence is more than a technological innovation; it is a transformative force redefining the global order through its legal, ethical, and public policy dimensions. Drawing on a historical and conceptual framework that spans from Frankenstein to contemporary AI regulation, she revisits the enduring question of the “responsibility of creators.” By comparing governance models across the European Union, the United States, and China, the article highlights why interdisciplinary research and the contributions of young researchers are essential to building trustworthy, human-centered AI systems.

Over the past three years, we have witnessed an AI revolution. When a technology becomes mainstream and turns into an integral part of the daily lives of millions, it is no longer merely an object of experimentation. This is what we have been experiencing since November 2022, when ChatGPT was released to the public and quickly became a go-to tool for countless everyday tasks. Today, AI can help doctors diagnose diseases more accurately, enable teachers to personalize education, and assist cities in governing more efficiently.

At the same time, AI can make unfair and biased decisions, exclude vulnerable groups from education or employment, generate inaccurate results, such as flawed financial advice, invade privacy, or be hacked by bad actors. This is why AI ethics and, where necessary, regulation are so important, and why the involvement of young researchers like you is essential.

Why AI Ethics and Governance Should Matter to You

AI is no longer science fiction. It is already embedded in your phone’s face recognition, your social media feeds, and job application screening systems. If, as projected, next-generation wireless networks become fully AI-native by 2030, there will be virtually no AI-free areas in our lives. This is why the decisions we make today about how to develop, deploy, and govern AI will define our near future. Understanding how to govern AI responsibly is no longer the responsibility of computer scientists or lawyers alone. It is the shared responsibility of everyone who lives in an AI-powered world.

Lessons from History: Frankenstein’s Warning

For Mary Shelley, the author of Frankenstein; or, The Modern Prometheus, one of the most pressing questions in science was the responsibility of scientists for the consequences of their experiments. Frankenstein is, therefore, not only the first science fiction novel, but also a political and philosophical work that remains highly relevant today.

Shelley wrote her novel in the early 19th century, a period of rapid industrial mechanization that fueled fears of unemployment and gave rise to anti-machine protests, most notably the Luddite movement. The novel warns that excluding ethics and human considerations from science will ultimately lead to the failure of scientific experiments.

As the creator of the creature, Victor fails in his parental responsibility, proving incapable of loving or guiding the being he has brought into existence. The story portrays a scientist who creates life but abandons his creation, ultimately setting in motion a series of tragic consequences.

Shelley’s message was clear: Creators are responsible for what they make. Today, this lesson is more relevant than ever. When companies create or deploy AI systems and models, their role should extend far beyond making profit. They should monitor, maintain, fix, and report any problems that may arise. Responsible innovation requires standing behind one’s creations. This is where ethics and the law come into play.

Diverse teams in AI development are vital to achieving fairer, more beneficial outcomes. This is where we need you. We trust that the novel perspectives and innovative ideas your generation brings can help prevent AI creators from repeating the mistakes of the past.

But what about governance diversity? Is diversity good, or should we aim for global alignment in governance for responsible and trustworthy AI? Even if we agree that diversity is valuable, a practical political, economic, and legal alignment will still be needed. Today, three main approaches define AI governance around the world: those of the EU, the United States, and China. Let’s have a quick look at each of them:

Three Different Approaches to AI Governance

1. EU AI Act: Risk-Based Approach

The EU AI Act is the world’s first and most comprehensive regulation on AI. Yet it is not a definitive solution to all legal questions, particularly regarding liability when AI systems cause harm. Instead, the EU AI Act adopts a risk-based approach, classifying AI systems according to the risks they pose to individuals and society.

Applications deemed to present an “unacceptable risk,” such as emotion recognition systems in educational contexts, are prohibited. High-risk systems, including those used in hiring or education, must comply with strict requirements concerning transparency, human oversight, documentation, and continuous evaluation. Meanwhile, the EU is currently exploring amendments aimed at simplifying and streamlining certain aspects of the AI Act to make compliance more practical for developers and organizations.
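The tiered structure described above can be sketched as a simple lookup. The use cases and tier assignments below are illustrative examples drawn from the categories the Act describes, not legal classifications:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations"
    LIMITED = "transparency duties"
    MINIMAL = "no specific obligations"

# Illustrative mapping of example use cases to the Act's four tiers.
# These assignments are simplified for illustration, not legal advice.
EXAMPLE_CLASSIFICATION = {
    "emotion recognition in schools": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the (illustrative) risk tier and its consequence for a use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_CLASSIFICATION:
    print(obligations(case))
```

The key design idea of the Act mirrors this sketch: obligations attach to the use case's risk tier, not to the underlying technology.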

2. United States: Innovation-Focused Framework

Currently, the U.S. takes a more fragmented approach to AI governance. Rather than a comprehensive federal framework, regulation largely occurs at the state level, with individual states addressing specific sectors.

California, for example, has taken a leading role in state-level innovation, recently becoming the first state to regulate AI chatbots by introducing requirements such as safety protocols and age verification.

At the federal level, there is still no overarching AI statute. However, initiatives such as the Genesis Mission, announced in November 2025, signal a move toward greater federal coordination. The initiative emphasizes strengthening American leadership in AI while maintaining a comparatively lighter regulatory approach than that of the European Union.

3. China: State-Led Development

China pursues AI leadership through coordinated state investment and regulation aligned with national strategic objectives. Chinese regulations emphasize content moderation, social stability, and integration with broader governance systems. This approach prioritizes collective order and state authority as well as the protection of minors and algorithmic transparency.

Critical Challenges You Could Research

AI systems are probabilistic, not perfect. They can generate responses that sound plausible yet are inaccurate. Even ChatGPT’s terms of use advise users to verify important outputs through human review.

This raises fundamental questions: How do we measure and communicate AI uncertainty? How should accountability mechanisms be designed to address situations in which AI systems cause harm or make mistakes? Addressing these challenges requires interdisciplinary research that combines computer science, law, and ethics.
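One simple, widely used starting point for measuring uncertainty is the Shannon entropy of a model’s output probability distribution. The sketch below is a minimal illustration under that assumption, not a complete uncertainty-quantification method:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in bits) of a model's output distribution.
    0.0 means the model is fully certain; higher values mean more uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident prediction vs. a near-uniform (maximally uncertain) one
# over four possible answers.
confident = [0.97, 0.01, 0.01, 0.01]
uncertain = [0.25, 0.25, 0.25, 0.25]

print(f"confident: {predictive_entropy(confident):.2f} bits")
print(f"uncertain: {predictive_entropy(uncertain):.2f} bits")
```

A system could use such a score to decide when to defer to human review, which is one concrete way to connect the technical question of uncertainty to the legal question of accountability.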

Your Role in Shaping AI’s Future

As Stanford AI researcher Fei-Fei Li reminds us: “AI is promising nothing. It is people who are promising or not promising. AI is a piece of software. It is made by people, deployed by people, and governed by people.” This means you, regardless of your major, can contribute to building trustworthy AI systems.

If you are interested in computer science, you could research bias detection or security vulnerabilities to develop technical solutions. If you are drawn to law, you could analyze regulatory frameworks, propose new governance mechanisms, or study international cooperation. If you are passionate about psychology, you could study human-AI interaction and automation bias. The field needs diverse perspectives and interdisciplinary collaboration to address its complex challenges.

Ready to dive in? Begin by identifying a specific question that excites you from the areas discussed above. Start reading current research, review the EU AI Act, study recent papers from organizations like Anthropic or Stanford HAI, and follow the latest developments in AI regulations. Adopt an interdisciplinary mindset: The best AI governance research draws insights from a wide array of fields, including computer science, law, ethics, psychology, and sociology.

Consider the real-world impact of your work: How could your research help make AI systems fairer, safer, or more accountable? Connect with mentors by seeking out professors working on AI ethics, policy, or security. Many universities now have AI research centers or initiatives that welcome student involvement. Do not be intimidated if you are just starting: every expert was once a beginner, and fresh perspectives often lead to the most innovative solutions.

Conclusion: The Stakes Are High

Will AI enhance human capabilities while respecting human dignity? Will it distribute benefits equitably or concentrate power? Will it protect privacy and enable democratic participation? Technology alone cannot answer these questions. They require people like you whose curiosity, diverse perspectives, and dedication can help build a better, fairer future.

As the philosopher Plutarch wrote over 2,000 years ago: “The mind is not a vessel to be filled but a fire to be kindled.” Your research can spark the conversations and innovations needed to ensure AI serves humanity’s highest aspirations. The future of AI governance needs your voice, your questions, and your dedication to the hard work of building trustworthy systems. The technology is advancing rapidly, the challenges are complex, and the opportunities for meaningful contribution are greater than ever.

Are you ready to contribute? Join us this summer for AI research programs at Özyeğin University.

Özyeğin University

Established in 2007 by the Hüsnü M. Özyeğin Foundation, Özyeğin University is an entrepreneurial research university focused on global impact, student development, and academic excellence.
