Women in AI: UC Berkeley's Brandie Nonnecke says investors should insist on responsible AI practices | TechCrunch



To give AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews focusing on the remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year, highlighting key work that often goes unnoticed. Read more profiles here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at UC Berkeley, which supports interdisciplinary research to address questions about the role of regulation in promoting innovation. Nonnecke is also co-director of the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, and of the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts a video and podcast series, TecHype, that analyzes emerging technology policies, regulations and laws, provides insights into their benefits and risks, and identifies strategies for making better use of technology.

Q&A

Briefly, how did you get your start in AI? What drew you to the field?

I have been working on responsible AI governance for nearly a decade. My training in technology and public policy, and their intersection with social impacts, drew me to this field. AI is already having a pervasive and profound impact on our lives, for better and for worse. It is important to me to meaningfully contribute to society's ability to use this technology for good rather than stand on the sidelines.

What work (in the AI field) are you most proud of?

I'm really proud of two things we've accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure the responsible procurement and use of AI. We take seriously our commitment to serve people responsibly. I had the honor of co-chairing the UC Presidential Working Group on AI and its subsequent permanent AI Council. In these roles, I gained first-hand experience of how best to implement our responsible AI principles to protect our faculty, staff, students and the broader communities we serve. Second, I think it's critical that people understand emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance on effective technical and policy interventions.

How do you navigate the challenges of a male-dominated tech industry and, by extension, a male-dominated AI industry?

Be curious, persistent and undeterred by imposter syndrome. I find it critical to seek out mentors who support diversity and inclusion, and to offer the same support to others entering the field. Building inclusive communities in tech is a powerful way to share experiences, advice and encouragement.

What advice would you give to women who want to enter the AI field?

For women entering the field of AI, my advice is threefold: Seek knowledge relentlessly, because AI is a rapidly evolving field. Embrace networking, as connections open doors to opportunities and provide invaluable support. And advocate for yourself and others, because your voice is essential in creating an inclusive, equitable future for AI. Remember, your unique perspectives and experiences will enrich the field and spur innovation.

What are some of the most important issues facing AI as it develops?

I believe one of the most important issues facing AI as it evolves is not getting caught up in the latest hype cycles. We're seeing this now with generative AI. Generative AI will certainly deliver significant advances and have a tremendous impact, good and bad. But other forms of machine learning are in use today, quietly making decisions that directly affect everyone's ability to exercise their rights. Rather than focusing on the latest marvels of machine learning, it is more important to focus on how and where machine learning is being applied, regardless of its technological sophistication.

What are some issues AI users should be aware of?

AI users should be aware of issues related to data privacy and security, the potential for bias in AI decision-making, and the importance of transparency in how AI systems work and make decisions. Understanding these issues can empower consumers to demand more accountable and equitable AI systems.

What's the best way to build AI responsibly?

Building AI responsibly means integrating ethical considerations into every stage of development and deployment. This includes engaging diverse stakeholders, practicing transparency, adopting bias-mitigation strategies and conducting ongoing impact assessments. Prioritizing the public interest and ensuring that AI technologies are developed with human rights, fairness and integrity at their core is fundamental.

How can investors better push for responsible AI?

This is such an important question! For too long, the role of investors has gone largely undiscussed. I can't tell you how influential investors are! I believe the “regulation stifles innovation” trope is overused and often untrue. Instead, I strongly believe that smaller organizations can enjoy a late-mover advantage, learning from the larger AI companies that are developing responsible AI practices and from emerging guidance out of academia, civil society and government. Investors have the power to shape the industry's direction by making responsible AI practices a key factor in their investment decisions. That includes supporting work that addresses societal challenges through AI, promoting diversity and inclusion in the AI workforce, and advocating for strong governance and technical strategies that help ensure AI technologies benefit society.



