
Teens at YouthAI advocate for Gen Z as world enters age of AI
How can young people ensure that the interests of humanity are protected as artificial intelligence evolves? High school senior Saheb Gulati sought to answer this question when he and UC Berkeley student Jason Hausenloy founded the Center for Youth and AI (YouthAI for short) in February 2024.
At the Center for Youth and AI, Gulati and his team are advocating for young people in the age of AI. While large corporations accumulate capital and push forward with AI, lawmakers are sitting tight and schools are resisting the integration of AI into the learning process. That leaves youth unprepared to enter a rapidly shifting job market and unheard in a conversation that could determine the direction of their careers.
The Beginning of YouthAI
“We founded the Center for Youth and AI because we were afraid of sleepwalking into another tech-related disaster,” Gulati said, referring to the advent of social media. “Society is unprepared for AI systems that will transform our education, careers, and lives. We wanted to change that.”
In June 2022, Gulati and Hausenloy launched the Pivotal Essay Contest, a high school essay competition on global issues run in partnership with Oxford University’s Global Priorities Institute. Reading through thousands of student submissions from around the world, they were struck by how many essays grappled with AI.
“It showed us that young people were thinking critically about AI’s risks and opportunities, but their voices weren’t reaching policymakers,” said Gulati.
When OpenAI’s GPT-4 was released in March 2023, a wave of buzz built around AI, and it has only grown since. With 400 million weekly users, OpenAI went from a $157 billion valuation to $300 billion between the beginning of 2024 and the first quarter of 2025. The 38 AI companies in the S&P 500 now account for $23.8 trillion in market cap, while the rest of the index combined accounts for less than half a trillion dollars more.
“AI has already shaped our lives in ways we don’t always recognize—from automated resume screeners determining job opportunities to social media algorithms influencing self-perception…and these models are the worst they’ll ever be. Everyone I know has had a different moment where they started to take AI seriously – reading about AIs designing new chemical weapons, prompting GPT-4 to generate funny poems, or generating images on Midjourney,” said Gulati.
These sudden (and sometimes frightening) developments were a call to action for Gulati and Hausenloy, who wanted to see youth involved in AI research and advocacy. Many of their concerns stem from the inaction of governments and schools in responding to the AI boom.
The Problem of Unrestrained AI Development
The U.S. federal government has not yet enacted any major laws or regulations regarding AI. Corporations are blazing ahead, implementing the technology into their workflows and investing billions in agentic AI. Gulati was troubled that young people, who will ultimately be most affected by AI in their careers, have not been represented in discussions about how the technology is used and regulated.
On both the business and policy sides, those with influence are much older: the average Fortune 500 CEO is nearly 60 years old, and the average member of Congress is 58. Gen Z, then, has to make an intentional effort to have a say in how AI affects them.
“We saw history repeating itself. Just as inaction on climate change led to today’s crisis, we feared that society was failing to get ahead of AI’s risks,” said Gulati. “We knew that AI’s impact would disproportionately harm those without existing capital or resources—especially young people entering the workforce, where automation is cutting off traditional entry-level opportunities.”
To better protect youth from these threats, Gulati believes the government should establish standard regulations that companies developing and experimenting with AI tools must comply with.
“Just as drug companies must prove safety before giving medicine to patients, AI companies should demonstrate their systems are safe and compliant with existing regulation before release,” he said.
Last fall, the California State Legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), intended to protect whistleblowers at AI companies and mandate risk assessments of high-cost models. Governor Gavin Newsom vetoed the bill after OpenAI, Meta, Andreessen Horowitz, and other tech and venture capital giants voiced their opposition. The law, which would have applied to all AI companies operating in the state, would have been a major milestone in early AI legislation and risk mitigation.
Lack of action on AI goes beyond the policymaking and regulatory level. The evolution of AI shows no signs of slowing, and the majority of youth are unprepared for its effects. Not only have major companies been allowed to develop their AI tools without restriction, but the United States education system has, at large, not adapted to these changes.
“Educational institutions are often struggling to keep pace, creating an uneven landscape where some students receive robust AI literacy while others receive little guidance. This digital divide threatens to worsen existing inequalities, as those with better access to AI education and tools will likely have significant advantages,” said YouthAI researcher and Georgia Tech student Cyra Alesha.
What They’re Doing About It
As an early-stage research organization, the Center for Youth and AI began with a major project: a survey of 1,000 teenagers in the U.S., conducted in partnership with YouGov. The poll asked American youth about their use of AI and their concerns with the technology. The results showed that four-fifths of the teenagers support legislation on AI risk and almost 60% are worried about AI-generated misinformation.
Under its three pillars of “representing, preparing and protecting young people,” YouthAI is on a mission to amplify the voices of teens. Its poll report has already been covered by mainstream media, including Fortune and TIME.
YouthAI is now focusing on AI research for educational purposes. Gulati is devoted to direct engagement with youth, setting out to empower teens and raise awareness of the issues he’s tackling. The team is “synthesizing” research findings in hopes of giving Gen Z a platform to access, understand, and have a say in these developments.
“Currently, we are building two research-based visualizations: ‘The Young Person’s Career Guide in an Age of AI’ and ‘You and AI? Imagining the Future of Human-AI Interaction’ to increase engagement and accessibility in AI information for youths,” said Alesha.
Each member of the YouthAI team approaches AI-related issues differently, following their own path toward equitable solutions. Hausenloy specializes in economics and public policy, while Gulati has experience in advocacy and building youth talent programs. Together with the rest of their team, they bring a fresh, research-minded approach to these modern problems.
“Our perspectives align in our belief that young people should be at the forefront of AI discussions, not just passive recipients of decisions made by older generations,” said Gulati.