[AUDIO LOGO] KRAIG KAYSER: Well, good morning, everyone, and welcome to our 73rd annual joint meeting-- oh my gosh-- annual joint meeting of the Board of Trustees and the Cornell University Council. More than 500 trustees, council members, and guests have made this annual pilgrimage to Ithaca from as far away as Singapore, and an additional several hundred are joining us on Zoom. And welcome to you all.
This morning, it is my honor to welcome a few special guests, my predecessor as chair of the board, Bob Harrison. Bob.
[APPLAUSE]
Dr. Robert Harrington, the brand-new dean of Weill Cornell Medicine and Cornell's provost for Medical Affairs.
[APPLAUSE]
Well, thank you for joining Cornell in this vitally important role. Now, this morning's program consists of two parts. The first includes this joint annual meeting and the state of the university address, and the second part is a keynote program moderated by Kavita Bala, dean of the Cornell Ann S. Bowers College of Computing and Information Science.
Now, this weekend is typically a celebratory one, a chance to highlight the university's successes, recapping our accomplishments and looking forward to next year's agenda. But we find ourselves grappling with the weight of the recent terrorist attack by Hamas that's shaken not only the global community, but has affected the Cornell community deeply.
And I have heard from several of you in this room and from around the world as you've shared your outrage and anguish at the brutal, indiscriminate attacks on innocents of all ages. And I can only express my personal sorrow to those of you who have been directly impacted by the initial violence, the tremendous loss of life, and the war that has naturally followed.
With the Cornell community of over 250,000 alumni in over 160 countries and regions around the world, we don't all share the same backgrounds or belief systems. But we do share a rich tradition of fostering peace and understanding and empathy, which is why last week's attacks were so heartbreaking.
But we also have a rich tradition, going back to our founding, of welcoming students from all over the world, who come together and spend the next four years of their lives on a campus on a hill in upstate New York. In fact, our very first class of 400 undergraduates in 1868 had five international students. So it's particularly apt that one of the three aims in the Cornell University mission statement is to educate the next generation of global citizens, a mission that perhaps has never been more important.
On a separate note, I would like to take a moment to talk about someone, a Cornellian who truly embodied fostering peace, understanding, and empathy: Chuck Feeney, who died last week at the age of 92, a proud member of the super class of 1956 and likely this university's single greatest benefactor since Ezra Cornell himself. His philanthropic support, given over decades through the Atlantic Philanthropies, and for a long time delivered completely anonymously, transformed Cornell across the university, including the founding gift for the first phase of the Cornell Tech campus.
Now, Chuck embraced a giving while living ethos that led him to spend nearly all of his wealth on numerous global causes, from human rights to health care to education, for over 40 years. And one vital role that Chuck played was as a peacemaker. Through Atlantic Philanthropies, Chuck was instrumental in facilitating and achieving peace initiatives and reconciliation in Northern Ireland.
These efforts ultimately led to the Belfast Good Friday Agreement in 1998 that officially ended decades of conflict. And he helped implement the resulting peace initiative agreement, in particular promoting human rights and justice, and as always, expanding and growing higher education. Chuck, like so many Cornellians, knew that support of higher education and broadening access to it was the key for future peaceful and prosperous societies.
Now, here at Cornell, Chuck's support revitalized undergraduate student life, enhanced and enlivened sciences, humanities and social sciences. And his outsized imprint will be felt for generations to come. And later this afternoon, the board of trustees will pass a memorial resolution honoring Chuck for his many transformative impacts on the university. And on behalf of the entire Cornell community, I offer our sympathy and condolences to Chuck's family and friends.
I recently began my second year as chair. And I have been reminded over and over what an honor it is to serve on this board with so many deeply committed trustees as we grapple with many of the consequential issues and decisions that affect our university.
It's also been a pleasure to work with President Martha Pollack and her team. Martha's stamina, optimism, and yes, verve have been enviable, and her leadership has been inspirational over many crises during her seven years as president. This past year, she was recognized for her leadership and her scholarship by her peers, who elected her to the American Academy of Arts and Sciences. Congratulations, Martha.
[APPLAUSE]
Also in this role, I have had the opportunity to interact directly with many hundreds, if not thousands, of passionate Cornellians. In my travels, I have had the privilege of accompanying President Pollack on several stops of her ongoing tour, where she met over 1,300 alumni at events in a half dozen cities. Now, these events coming out of the pandemic were full of enthusiastic alumni who wanted to catch up with the goings-on at Cornell after three long years.
On one trip that I made with the Cornell team to Asia, I was reminded of Cornell's global impact when I met a Korean alum from Seoul. He'd been a hotelier a little more than a decade ago. And as we were walking out, he pulled me aside and asked me if I knew Chuck Feeney. And I said I'd been fortunate enough to have met the gentleman. And he looked at me and said, he's my inspiration. He's who I want to be.
That example of someone wanting to truly give back is evident across this Cornell community, as alumni give back in so many ways, in treasure and time and talent. And I believe this desire comes from a particularly close lifelong connection that so many Cornellians feel with their alma mater and with each other. And I believe the seeds of this enduring bond were planted from the very start.
Right in our university charter, it stipulates that as soon as Cornell University had 100 alumni, they could elect a representative to sit on our board. Now, this provision was rare at the time. Only Yale had a similar one. And Cornell's first alumni-elected trustee began serving in 1874, soon after we passed that 100-graduate mark. So this year actually marks 150 years since alumni began electing prominent Cornellians as board representatives, thereby deeply engaging all of us in the business, concerns, and pride of our university.
And now, I would like to take the next few minutes to introduce our six newly elected members of the Board of Trustees, who started their terms in July. And I would like to start, appropriately, with our two newest alumni-elected trustees. First, Valisha Graves, class of '85.
[APPLAUSE]
Valisha is a graduate of the College of Arts and Sciences. She's an executive director at JPMorgan Chase in Consumer and Community Banking. And at Cornell, Valisha has volunteered for pretty much everything. She is currently chair of the President's Council of Cornell Women. She's a council member, serves on the WE Cornell Advisory Board, the Cornell Black Alumni Association, and several other groups. Frankly, too many to list. Welcome. Thank you.
[APPLAUSE]
Next, John Toohey-Morales, class of '84. John, where are you?
[APPLAUSE]
Now, those of you from South Florida may know John from TV. He's John Morales, hurricane specialist.
[LAUGHTER]
And the CALS graduate is an atmospheric and environmental scientist who has been the chief meteorologist at Channel Six in Miami and has often found himself on the front lines of severe weather events, like last year's devastating storm, Hurricane Ian. That gives John a keen understanding of the real-world implications of climate change, and I welcome his important and knowledgeable voice on the board for this and other sustainability-related issues. He's also been a champion for diversity, bringing many minority students into the field of meteorology.
At Cornell, John received the CALS Outstanding Alumni Award in 2022, serves on the Atkinson Center's External Advisory Board, is on council, and is also the president of his class of 1984. So John, I hope that as you join the Cornell board, you are as adept at explaining the quirks of weather here in Ithaca as you are in Miami.
[APPLAUSE]
I would now like to introduce our newest undergraduate student-elected trustee, John Paul "JP" Swenson, class of '25.
[APPLAUSE]
JP is in the back. He joins a long line of distinguished student-elected trustees beginning in 1971, including our own Bob Harrison. JP is majoring in industrial and labor relations, and is also pursuing minors in both business and public policy.
He's been a working student, including at the store at the Robert Purcell Community Center. As a sophomore, he served as the undesignated representative at large for the student assembly. He's also involved with the Pi Sigma Epsilon business fraternity and writes for the Cornell Workplace Review. Welcome.
[APPLAUSE]
Now on to our three board-elected trustees. First, we have Jennifer Davis, class of '99, a Dyson School grad.
[APPLAUSE]
Jennifer is a partner on Bain Capital's North American private equity team, and before that she spent 19 years at Goldman Sachs, serving as a partner in several leadership roles, including co-head of recruiting for Cornell, mentoring students entering financial careers. At Cornell, she's been very active on several Cornell SC Johnson College committees and councils, was a Dyson School leader in residence, and has established a scholarship there. So welcome and thank you, Jennifer.
[APPLAUSE]
Next is Paul Rubacha, class of 1972, a CALS graduate, who went on to earn his MBA in 1973.
[APPLAUSE]
Paul is principal and CEO of Ashley Capital, one of the largest privately held industrial real estate investment companies in the country, and has more than 30 years of entrepreneurial experience in real estate. For Cornell, Paul has held volunteer positions for over three decades, from terms on the council to co-chair of the Board of Trustees Real Estate Subcommittee, longstanding membership on the College of Architecture, Art, and Planning Advisory Council and the Baker Real Estate Advisory Board, and he has greatly enhanced the university's real estate program by establishing the Paul Rubacha Department of Real Estate in 2022. Welcome to the board, Paul.
[APPLAUSE]
And finally, K. Lisa Yang, class of '74, an ILR School graduate. Where are you, Lisa?
[APPLAUSE]
Lisa had a successful career in investment banking in both New York and Singapore, and now devotes most of her time to philanthropy and investment management. She's made numerous significant philanthropic investments in biomedical research related to autism and brain disorders, and she was the lead donor in establishing the K. Lisa Yang and Hock E. Tan Employment and Disability Institute at Cornell.
She also has a passionate interest in wildlife and conservation, and established a center for conservation bioacoustics at the Cornell Lab of Ornithology. At Cornell, she's a life member of council, a member of the ILR Dean's Advisory Council, and a member of the Lab of Ornithology's administrative board. Thank you, Lisa.
[APPLAUSE]
I would now like to introduce the new members of the Cornell University Council. Unfortunately, we don't have time to introduce everybody individually. So if you would please stand and be recognized, new members of the council.
[APPLAUSE]
Well, thank you for your leadership, your commitment to the university, and for acting as our most important ambassadors.
A little bit later this morning, you will hear a fascinating keynote presentation, a panel discussion on the impact of artificial intelligence on free expression. And here at Cornell, with one of the top three programs in the country, we're well-equipped to grapple with AI's potential, as well as its challenges, at this moment in time.
But now, it's my pleasure to introduce Arturo Carrillo, the chair of the Cornell University Council. Arturo received his bachelor of science from the College of Engineering in 1996 and his master's in engineering in 1997. He lives in Dallas, Texas, with his wife, Jamie, also a member of the class of 1996 and also a council member. And I believe your son is here as well. So welcome.
[APPLAUSE]
ARTURO CARRILLO: Trustees, council members, parents, faculty, and staff, today I will provide you with an update on what the Cornell University Council is doing and where we are headed. We continue to work on the campaign to do the greatest good.
This year, we will continue our theme of engines of engagement, using our knowledge and expertise to work in our respective voluntary communities to expand, propel, and solidify our connection with Cornell and with each other.
Our engagement committee is developing communication tools to support council members, including the ambassador toolkit, and also expanding our knowledge of Cornell through various learning opportunities. They will continue to help us reach out to other communities to engage all Cornellians. Thank you, Terri Denison, for leading this committee.
[APPLAUSE]
Our mentoring committee is well on its way onboarding new members. We're pleased that the 61 new members we just introduced are matched with other council members, who provide guidance and knowledge. This committee is also helping those who are on hiatus continue to stay engaged with the university through our continuous engagement mentors.
This is a key initiative. We want people to stay engaged with Cornell once they reach the pinnacle of the organization and are ready to move on. If you want to help us with this outreach, please let us know. Greg [? Hart ?] is leading this committee. Thank you, Greg.
[APPLAUSE]
This year, our development committee is aiming to increase our 90% giving participation rate of our regular members to 100%. And it's challenging our life members to have similar or better statistics. Thank you, [? Stephen ?] [? Monk, ?] for your efforts.
[APPLAUSE]
Danielle Contreras is leading the diversity, equity, inclusion, and belonging committee. We are examining the way we do things, our many methods and practices, with an eye to ensuring that we have more inclusion, that we help foster new relationships, and that people from all sorts of communities have a longer term with us. We hope the session yesterday helped reach those goals.
[APPLAUSE]
The membership committee has a hard task of selecting those that are offered membership to Council out of hundreds of qualified candidates. Last year, they reviewed over 200,000-- 200 nominations.
[LAUGHTER]
To be clear, 200. They selected 133 new and returning members to join Council, and 25 members were invited to become life members. Congratulations to all of you.
[APPLAUSE]
But I would be remiss if I didn't remind everyone that we are taking nominations to Council through October 31. So get those nominations in now. Thank you, Jill Fields, for chairing this committee.
[APPLAUSE]
All this work would not be possible without the help of our staff and our vice chairs as well. Thank you.
[APPLAUSE]
As I mentioned earlier, we continue with our theme of engines of engagement. Last year in this forum, I mentioned that the definition of an engine is something that transforms a force into motion. We asked our volunteers to convince other alumni to transform their connection with Cornell into engagement with Cornell.
We recently did a poll and we realized that, in Council, over 50% of volunteers do what they do because they believe Cornell made them what they are today. But the key thing is that who we are today is what we do today, and that is to support Cornell to do the greatest good.
Our engagement will help provide our faculty with the opportunity to continue to do research that pushes the frontiers of science, educates minds, and prepares the leaders for New York State, the country, and the world. We are again asking our Council members to be the engines that will help reach our campaign goal of 200,000. 200,000. This is the right one.
[LAUGHTER]
Alumni engaged through our various committees. We want to be the catalytic agents who continue to be our finest engines of engagement. Thank you.
[APPLAUSE]
Now, I would like to welcome President Martha Pollack back up on the stage for her State of the University address.
[APPLAUSE]
MARTHA POLLACK: Good morning, everybody. Before I begin my comments, I want, like Kraig, to acknowledge the horror and the pain of the current moment. The atrocities perpetrated by the Hamas terrorist organization in Israel have left the world reeling with shock, horror, anger, and grief. The brutal attacks have shattered countless innocent lives and challenged our very understanding of humanity. And along with the senior leadership of the Cornell Board of Trustees, I stand here to, once again, condemn terrorism in the strongest possible terms.
[APPLAUSE]
I also want to acknowledge the extraordinary pain of all innocent people who are now suffering. Israelis, Palestinians, and others with ties to the region. As I said earlier this week, I'm a grandmother. And my heart absolutely breaks for all the babies, all the children who are caught up in this violence. We've also watched with distress the increasing acts of violence directed at Jews and Muslims here in the United States.
And here, at Cornell, our community feels a great deal of pain, anger, and fear. I understand. We live in a divided world. But I know that this community, our Cornell community, can come together in difficult times and stand as we always have against hatred of all forms. So today, I ask all Cornellians to offer compassion and empathy and to provide one another with the support that we all so need at this moment.
[APPLAUSE]
We're an academic community, and we're connected not only by our collective humanity, but by a set of core values that characterize what it means to be a Cornellian. Five years ago, we undertook as a community to put into words what it is that distinguishes us.
And today, I think it is more important than ever that we reflect on the values that define our ethos: purposeful discovery, free and open inquiry and expression, a community of belonging, exploration across boundaries, changing lives through public engagement, and respect for the natural environment.
Our core values are a reflection not just of our past and our present, but of our potential. They describe who we are and what we aspire to be. And so what I'd like to do today is place the achievements of the past year into the context of those core values, showing you just some of the many ways that we're working to be the university that Ezra Cornell imagined, but now, reimagined for the 21st century.
It begins, of course, with purposeful discovery. We're an academic institution, and our excellence rests on our academic distinction: on the work of our faculty and our students to expand the boundaries of human knowledge and to deepen our understanding and our appreciation of our world in all of its beauty and its complexity.
We do that through the work of faculty like Sadaf Sobhani, Assistant Professor of Mechanical and Aerospace Engineering, who just last month was selected for a 2023 NASA Early Career Faculty Award, supporting her work using machine learning and novel ionic liquids to develop thermally stable, low viscosity, high performance heat transfer fluids. Necessary--
[LAUGHTER]
Yes, I'll say that again for you. Thermally stable, low viscosity, high performance heat transfer fluids.
[APPLAUSE]
Necessary to help a spacecraft's thermal control systems work smoothly even in extreme heat and cold. We do it through the work of Cornell Architecture, Art, and Planning Professor Sara Bronin, a leading expert on historic preservation law and land use, who received United States Senate confirmation in December as chair of the Advisory Council on Historic Preservation, helping to ensure that the rich history of our nation is protected and celebrated in ways that will bring education, continuity, and a sense of community to future generations of Americans.
And Sasha Rush, Associate Professor of Computer Science at Cornell Tech, whose work to make generative AI systems safer and easier to use has been recognized with an NSF CAREER Award and a Sloan Fellowship. Rush works on natural language processing, specifically the kinds of AI that generate text: the incredibly useful applications that translate languages, summarize documents and data sets, and answer to "Hey, Siri."
[LAUGHTER]
Rush is part of our university-wide AI initiative, which connects all of the innovative and visionary work being done across Cornell to shape a future in which human-centered, ethical AI benefits our lives, our society, and our planet, helping us to do things like predict and prevent heart failure in cardiac patients, make the world more accessible to people with disabilities, ensure the fairness of systems that recommend job candidates, and farm more productively and sustainably.
And of course, we're paying close attention to the utility and impact, current and potential, of AI on our campuses with new guidelines for incorporating generative AI into our teaching, active exploration of AI applications in our operations, and guidance for our community about the opportunities, limitations, and risks of AI tools.
Purposeful discovery also drives the 84 graduate students who were selected as National Science Foundation Research Fellows this year, comprising the largest group of new Fellows Cornell has ever fielded in one year and representing more than 4% of all the NSF fellowships awarded nationwide. Yes, 1 in every 25 of all of this year's NSF research Fellows is here at Cornell.
[APPLAUSE]
The kind of purposeful discovery that we prize at Cornell is only possible because of our next value, free and open inquiry and expression. Without the right of our faculty and students to freely explore all ideas in the search for understanding and truth, we could not fully satisfy our mission of creating new knowledge nor of preparing our students to be global citizens.
So this year, we've chosen to celebrate and to explore that freedom as a community through our university-wide theme year, "The Indispensable Condition: Freedom of Expression at Cornell." Our goals are both to deepen understanding of the issues surrounding free expression and to provide opportunities to develop the skills essential for a civil society, such as active listening, leading controversial discussions, and effective advocacy.
We're pursuing those goals in ways that reflect the breadth and the depth and the excellence of our academic community: with invited speakers and panels exploring the foundations of First Amendment law, with exhibits that explore how clothing, art, and appearance can function as symbolic speech and expressive conduct, and with the new John W. Nixon, class of '53, Distinguished Policy Fellows program, a part of the Brooks School of Public Policy's Learning Through Difference initiative.
The inaugural Fellows are Democrat Julian Castro, former US Secretary of Housing and Urban Development, and Virginia Republican and former Congressman Tom Davis. Last month, we hosted a sold-out production of the opera Scalia/Ginsburg-- it was great-- performed in Willard Straight Hall. This opera celebrates the civility and the shared values that enabled a famous judicial friendship to flourish across deeply held difference, and it has the world's first rage aria about constitutional originalism.
[LAUGHTER]
And there are many, many, many, many more activities across all of our campuses. The title of our theme year comes from the words of the late Supreme Court Associate Justice Benjamin Cardozo, who called freedom of speech the matrix, the indispensable condition, of nearly every other form of freedom.
And freedom of expression is, indeed, the indispensable condition not only of our academic enterprise, but of our democracy. Yet, it is under attack in this country from across the political spectrum. We're seeing everything from speakers being shouted down to very dangerous laws banning books from libraries and ideas from classrooms.
But it is our responsibility to ensure that our students have the opportunity to engage with ideas that challenge them. Because being exposed to ideas that one disagrees with is a core part of a university education, key to learning how to evaluate information and develop considered beliefs, key to developing intellectual humility, and key to learning how to advocate for one's own deeply held values. That is what we must maintain at Cornell, and that leads to our next value, being a community of belonging.
Cornell was, of course, created, as we all know, as an institution for any person with the understanding that our teaching, our research, and indeed, our society all benefit from a university that welcomes many different kinds of people with many different perspectives and puts them in an environment where they can learn with and from each other.
We honor that foundational commitment to diversity, equity, and inclusion in many ways. For example, our Office of First Generation and Low Income Student Support, which brings together the resources and programs that enable students from less advantaged backgrounds to navigate and thrive at Cornell, under the leadership of the Peggy J. Koenigs, class of '78, Associate Dean of Students for Student Empowerment, Dannemart Pierre. Cornell is the first Ivy to establish an endowed position like Pierre's.
The number of first generation and low income students at Cornell continues to grow thanks to the incredible generosity of our alumni and friends. Today, in this campaign, we've raised $360 million for undergraduate affordability toward our three goals: increasing the number of aided students at Cornell by 1,000 while we're increasing our student body by 650, decreasing average student debt at graduation by 25%, and ensuring that every aid-eligible student has the opportunity to participate in an academically enriching summer experience without worrying about their summer savings expectation.
We have already been able to add nearly aided students, we've increased grant aid, and thereby decreased loan aid by an average of $14,000 over academic year 2020. And as of fall 2024, most families with incomes up to $75,000 will receive no-loan financial aid packages.
[APPLAUSE]
Cornell's military tradition, including ROTC, adds another important dimension to diversity on our campus. We support active military members and veterans on their paths to Cornell with a dedicated military veteran admissions and enrollment services team. And we also provide resources to our veterans once they're here. For example, through our veterans house, a campus residence that serves to integrate and support all members of Cornell's military community.
I also need to mention our response to last summer's Supreme Court decision regarding race-conscious admissions. Although we were deeply disappointed by the ruling, we abide by the law, and we have modified our admissions practices accordingly. At the same time, within the bounds of the law, we continue to pursue our mission, seeking to build academically outstanding classes that are broadly diverse.
To advance this goal, we are now implementing the practices recommended by the Cornell Presidential Task Force on Undergraduate Admissions. Everything from partnering with organizations that support high achieving students from economically under-resourced communities to simplifying the transfer of credits from community colleges to streamlining and enhancing our financial aid processes.
Our next value, exploration across boundaries, is fundamental to our ability to address challenges that do not neatly fall into one field of study, which is to say nearly all modern societal challenges. The depth and the breadth of our faculty's expertise across disciplinary boundaries and our willingness to delve into new fields of study have fueled innovation across an incredible range of areas.
And they've inspired new departments and programs across and beyond traditional fields, like the new multi-college Paul Rubacha Department of Real Estate, an innovative collaboration between the College of Architecture, Art, and Planning and the Cornell SC Johnson College of Business designed to advance real estate education and research.
And our campus-wide Master of Public Health program, with a One Health approach recognizing the interconnections between people, animals, plants, and the planet, offering concentrations like emergency preparedness and management, and food systems and health. Exploration across boundaries means connecting not just our colleges but our campuses, with collaborations that build on the complementary strengths of our Ithaca campus, Cornell Tech, and Weill Cornell Medicine.
For example, our newly launched Department of Design Tech, which works to advance innovation, research, and teaching at the intersection of design and emerging technology across our Ithaca and Cornell Tech campuses. And our first inter-campus vaccine symposium, which drew faculty and students from across our campuses to the Veterinary College in August for a two-day exploration of the state and future of vaccines, vaccine technology, host immunology, and vaccine policy and communication.
The knowledge and the expertise created at Cornell have a reach far beyond Cornell. As the only land-grant university in the Ivy League, we have a mandate to take the work that we do out into the world by changing lives through public engagement. So, for example, the students in Professor Max Zhang's class on the internet of things spent six weeks learning how to build code and extract data from sensors. They then spent the rest of the semester putting that knowledge to work as part of a National Science Foundation-supported project to design and implement a statewide internet of things in New York.
Last year, the students invented devices that monitor blood pressure and send data directly to health care providers, track needed road repairs for the Ithaca Department of Public Works, and notify Mutual Aid Tompkins when a food pantry needs restocking.
Cornell's engagement reaches even further, out beyond our city, county, and state, with programs like the Cornell Keystone Nilgiris Field Learning Program in a very rural region in South India, where Cornell students, paired with local Indian students, work with NGOs and community health workers on issues like mental health and well-being interventions.
Our international engagement is strengthened by the work of our Global Hubs, launched last fall, a network of 19 peer institutions in university-wide partnership with Cornell, each connecting all of Cornell with their communities, countries, and regions by exchanging students and faculty and facilitating collaborations and research.
Cornell's learning and scholarship, our culture of collaboration, and our land-grant drive for engagement combine to give unique strength to our final value, respect for the natural environment. We continue to move forward on our goal of carbon neutrality on the Ithaca campus by 2035.
I'm very excited about our planned new 110-megawatt solar photovoltaic project in Batavia, New York. Once it goes online, probably in 2025, it will bring us to a critical milestone in our sustainability goals: meeting the electricity needs of our Ithaca campus with 100% renewable energy.
[APPLAUSE]
We'll also be using the project to continue developing best practices for sustainable design and operation, such as bifacial panels that capture energy on both sides, solar tracking, and of course, our most cutting-edge technology for keeping grass and weeds off the panels of all of our solar farms: solar mowers.
[LAUGHTER]
[APPLAUSE]
We're moving forward with Earth Source Heat, our ambitious plan to heat the sometimes chilly Ithaca campus using deep geothermal heat. The data we gathered from our borehole last year were promising, and we continue to work to better understand the feasibility of designing and building a fully functioning demonstration project. If successful, the project could have impact in many locations beyond Ithaca.
Of course, solving the crisis of environmental degradation and climate change will require countless solutions to countless challenges. And researchers across Cornell supported by Cornell Atkinson and the 2030 Project are working across all of our colleges and schools to find them from floating solar panels to sustainable alternatives to lawns to new ways of upcycling polyester.
Cornell's staff are key to the work of making our campus a model of sustainability through projects like our annual Cornell Dump and Run, which collects unwanted items from shower rods to sneakers to vacuum cleaners from departing students in May and then sells them to arriving students in August. And through our dining operations, which incorporate sustainability into all of their practices from composting food waste to smart refrigeration technology to centralized oil recovery and refill systems.
And while I'm on the topic of our amazing dining staff, Cornell was ranked number 2 by the Princeton Review for Best campus food this year.
[APPLAUSE]
Wait, there's more. Cornell Dining won the 2023 gold medal for Best Residential Dining Facility from the National Association of College and University Food Services for Morrison dining. Now you can go.
[APPLAUSE]
And last year, I told you that Cornell had become the first university to receive a platinum rating from AASHE, the Association for the Advancement of Sustainability in Higher Education, three times in a row. This year, we earned it a fourth time.
[APPLAUSE]
Our success in reaching our ambitions, everything that I've talked about today, is made possible by the ethos that connects our entire community, faculty, students, staff, and alumni. That ethos is what makes us Cornellians. And it is what has driven so many of you to be part of our work to do the greatest good.
Our campaign continues to move forward with tremendous momentum, with our last two years being the best two fundraising years in Cornell's history. Cornell's campaign is primarily a campaign of people, supporting our faculty, our students, and our staff. But part of supporting an academic enterprise is providing the resources and the facilities our community needs to perform at the highest levels.
I'm delighted that we're now moving forward on five tremendously important capital projects. At Weill Cornell Medicine, a new light-filled, energy-efficient, 16-story residence will become home to 262 graduate and medical students, nearly doubling available student housing and making a key investment in the education and well-being of our future doctors and scientists.
[APPLAUSE]
Here in Ithaca, Atkinson Hall will become home to the Cornell Atkinson Center for Sustainability, the Department of Computational Biology, the Center for Cancer Biology, and the Center for Immunology, and will provide a home to support campus-wide Master of Public Health programs.
I know that some of you will be going on hard-hat tours of Atkinson Hall this afternoon. We scheduled two, and they were booked in about three seconds flat. So enjoy that tour. The new state-of-the-art Ann S. Bowers College of Computing and Information Science building will provide space for the college's rapidly rising student enrollment and its ambitions to increase its faculty by 40% to 50% over the next few years.
And McGraw Hall, one of our oldest buildings, is now receiving a deeply deserved and much needed renovation.
[APPLAUSE]
Restoring its structural stability, modernizing its spaces, and adding both seminar classrooms and an active learning classroom. And finally, on the topic of buildings, I am absolutely delighted to announce plans for our Meinig Field House, named in honor of former Chairman of the Cornell Board of Trustees Peter C. Meinig, class of '61.
The Meinig Field House will support all of our students with expanded space for physical activity year-round on our central campus. And like our new Booth Field, dedicated last fall and now home to Cornell's baseball team, the Meinig Field House will add more strength to the Big Red, including our men's lacrosse team, which won its 31st Ivy League championship last year, the most of any school, and women's field hockey, which has won 10 of their 13 games so far this season.
[APPLAUSE]
Internationally, Cornellians won a gold and a silver at the World Wrestling Championships in Belgrade last month. And we will be represented at the 2024 Olympics in Paris in triathlon and rowing.
[APPLAUSE]
This is the point in my speech where I was planning on wrapping things up. But just when I thought I was done, Cornell alumnus Manuel Munoz, MFA class of '98, received a MacArthur Genius Grant.
[APPLAUSE]
For what was described, in the words of the MacArthur Foundation, as his work, which renders with empathy and vivid detail the multifaceted lives of Mexican-American communities in California's Central Valley. And then, when I labeled this speech final, three of our Cornell faculty were elected to the National Academy of Medicine.
[APPLAUSE]
And just when I said, OK, that's it. I'm really done. No more additions, [? ASL ?] came calling about the Nobel Prize in Economic Sciences.
[APPLAUSE]
Awarded to Claudia Goldin, class of '67, for her pioneering work uncovering key drivers of gender differences in the labor market. There's so much more I could tell you about the work we're doing in so many ways to move our values and our mission forward, with the creativity and the solutions, the knowledge and the inquiry, and the global citizens and leaders our society needs to thrive.
But if I don't stop somewhere, we're never going to get to lunch. So as I do every year, I just want to end with my thanks to all of you for everything you do to make Cornell a place for any person, any study, where our imagination is matched by our innovation and our ethos by our excellence. Thank you all very, very much.
[APPLAUSE]
Thank you so much. And now, I am absolutely delighted to introduce our keynote program. We have a video to share with you to introduce our remarkable faculty panelists and our tremendously important topic of discussion, artificial intelligence and free expression.
[APPLAUSE]
[BEGIN VIDEO PLAYBACK]
- Artificial Intelligence, AI, is a multidisciplinary field of computer science focused on creating machines or software that can perform tasks that would normally require human intelligence such as understanding natural language, recognizing patterns, solving problems, and making decisions.
- What makes AI particularly interesting now is, of course, that it's driving a lot of innovation very fast. But what people struggle to wrap their brains around is that the pace of innovation itself is accelerating. And so things are changing so fast that society, which takes a little longer to respond to these changes, is not quite catching up with the pace of innovation.
- I think we're starting to see how AI and, in particular, the set of technologies that are collectively known as generative AI, is already starting to impact how we interact with each other.
- Certainly, lots of us have found uses for AI in our everyday communications with one another in being creative, right? There's a lot of potential for allowing people to do things that they didn't have the skills to do before or that technology didn't enable them to do before. It's really exciting, right? It's exciting to see these new technologies open up paths for people to express themselves differently and self-actualize in different ways.
At the same time, I think one of my biggest concerns about especially new generative AI tools is the degree to which they maybe make it difficult for people to understand whether what they're seeing is quote "real".
- What is your perception of reality?
- We find that as AI floods the information ecosystem, people will just throw up their hands and say, I don't even know what to believe. So I'm either not going to believe anything or I'm going to believe the partisan cues that are coming at me of the person I trust.
Now, you're kind of in this nihilistic world where you just won't trust your public elected officials either when they tell you to pay your taxes or listen to public health guidelines.
- What are we going to do to make sure that people really know what happened, right? Or how do people know what kinds of information to trust? The most critical thing will be ensuring that people have some tools for understanding whether they're learning about something that really happened in the world or whether they're seeing something that's generated.
- It's not enough to say that we are comfortable with speech online because we're talking about individuals and there might be some negative consequences, but that's part of living in this society. The scale and scope of what AI could do, I think, is beyond the social media posts or the kind of misinformation that we've seen that already, I think, was sufficiently unsettling for many of us to call for its own regulation.
Saying that in this country we don't historically regulate technology I think is going to be politically and socially infeasible given the scope of the challenges that AI presents.
- Freedom of expression is critical for great science. Galileo, for example, bucked conventional wisdom at that time and fundamentally changed the course of science and humanity as a result of it.
Freedom of expression is also what creative people want. A truly creative person wants to be able to be free to express their creativity in whatever way, in whatever language, in whatever medium they choose to do so.
As a university, it's our job to create the environment where our scientists, our creators can feel that they can thrive and explore the boundaries of new ideas and go where it takes them.
- We are in a critical moment of how AI is used in our society, and in particular, how it relates to free expression. We can see that the world has shifted in the way that we will contribute and create content.
- It raises really important questions about the degree to which that will impact trust in one another, what that will do to the labor market, to people who write for a living or who create those images for a living, how we think about information quality, and whether we know whether something is real or not. And so there are lots and lots of really important political and social questions that I think we also have to answer in addition to understanding the technical aspects of these tools.
We are not totally sure how the law will regulate artificial intelligence particularly as it applies to speech. Issues of libel, issues of creative expression, issues of speech protection for this content, I think, are going to be really crucial in how we see the evolution of legal analysis and technology in the coming years.
When it comes to free speech protections in the US, up until about the mid-20th century, we weren't thinking about technical issues as much. Then radio, television, those technologies changed the free speech analysis. And then, the internet itself also changed how we think about free speech claims. Now, AI is, I think, another leap ahead of where we were even in the last 10 or 15 years.
- And this will all have significant implications for expression, whether it's human expression that will become perhaps easier, perhaps more generic. We don't yet know. It will change the ability of disinformation actors to produce content that is misleading or biased or misrepresents reality in some way.
- We have the 2024 election coming up. How might these tools be misused? And so the Senate Rules Committee, for example, was interested in whether they need new rules to accommodate these new technologies. These, then, are fundamentally questions of expression and of finding the lines between one person's free expression on one hand and vulnerable communities on the other.
- I truly believe that Cornell is in a unique position to think about and impact how AI is used in the world, in our society, especially as it relates to free expression.
- We're trying to understand from that interdisciplinary perspective how we can harness these technologies for the greatest good, but also, implement them in ethical ways.
- Our students are going to be actively working on these matters as scientists, as technologists, as lawyers, as policy leaders. And so that means that we have to educate them in the classroom to engage with these questions.
- We want to give them the tools, the intellectual knowledge, the skills that they need to go out into the real world. They are here for a short time. And then, they'll spend the rest of their life living in society taking this knowledge and education and having an impact.
- So I think what makes Cornell really unique in the AI research and discovery space is the way we go about answering questions. It's the kind of place where we take a new emerging technology like AI or we take a social problem and we know that one person's expertise is not going to be enough to solve that problem, right? Or one discipline's way of knowing the world.
And we all come together to study problems like AI together. And I think doing that allows us to integrate our different types of expertise. To teach our students in a way that really pulls from lots of different ways of knowing the world. And ultimately, I think that lets us answer more important questions than any one of us could on our own.
[MUSIC PLAYING]
[APPLAUSE]
[END VIDEO PLAYBACK]
KAVITA BALA: It's great to be back here at TCAM and to talk this time about AI, but in the context of the theme year here at Cornell, which is on free expression. As you all know, artificial intelligence is everywhere. Over the past decade, we've seen an explosion of AI being used in Siri and Alexa, translating what you say; in your Tesla, understanding where passengers and other cars are on the street; in AI deciding what content you see, whether it's on Facebook or Twitter, or X as they call it now; or what movies are recommended to you on Netflix.
That was the past decade. Then there was this past year, where we have seen a new kind of technology. Most of you know it from the term ChatGPT, but the broader name that's used for all of these technologies is generative AI: AI that can generate content. So it can write in the style of any author. It can write in the style of Shakespeare. It can carry on involved conversations with humans.
You saw it in the video here: ChatGPT was opining. It was an AI opining on the topic of AI and free expression. Technologies like DALL-E have been used to produce images and video in any style that you want. So you can just type a few words and it will produce those images of students sitting on a college campus. But none of them were real images. They were all AI generated.
Then, going further along that line, you see the deepfakes. The video of Morgan Freeman: he never uttered those words. But it was completely believable to you.
So this tech really allows people to make complex arguments, complex visuals, with just a few keystrokes, which is incredibly exciting from the point of view of democratizing content generation, having everybody, your children, play with this and generate new content. So, great creative freedom. But you can also see that it can create a lot of disinformation. AI can stifle free expression, can change free expression.
We have a panel of world experts here today to discuss this aspect, AI and free expression. So before we get into the topic, though, I'd like each of them to introduce themselves and talk about their interest in the area, starting with Gautam.
GAUTAM HANS: Good morning, everyone. My name is Gautam Hans. I teach in the Law School's First Amendment Clinic. My research focuses on technology and free speech issues, with a particular interest in the challenges of regulating in technological areas that implicate First Amendment scrutiny.
KAREN LEVY: Hi, everyone. It's a pleasure to be with you today. My name is Karen Levy. I'm an Associate Professor in the Department of Information Science, and I'm affiliated with the law school as well. I'm a sociologist and a lawyer by training. And my research examines the social, legal, and ethical dimensions of new data-intensive technologies, particularly in the context of labor and the workplace.
MOR NAAMAN: Hi, everyone. My name is Mor Naaman. I'm a Professor of Information Science at the Jacobs Technion-Cornell Institute at Cornell Tech. I've been studying the trustworthiness of our information ecosystem, or rather the lack thereof.
[LAUGHTER]
And in the last few years, since 2018, we've been looking at generative AI and its potential impact on our communication and media ecosystem.
SARAH KREPS: Good morning, everyone. I'm Sarah Kreps. I am a professor here with appointments in Government, in the Brooks School of Public Policy, and in the law school. And my teaching and research are on the intersection of emerging technology, politics, and national security.
KAVITA BALA: An incredible group of experts here; it's a privilege to moderate this panel. Let's get started. Social media has risen, and its impact on society, of course, has been staggering. You all know this: social media connects communities, drives the economy, and is an integral part of most of your lives and a lot of your communication, a lot of the communication on the planet.
But it has its downsides. We've talked about this: polarization and misinformation are just a couple of examples. There's been a lot of more recent discussion around the impact of social media.
AI, artificial intelligence, is used in the context of social media by companies to boost or to suppress the visibility of certain content. So when you boost somebody's content, you're essentially giving them a platform, a platform for their opinions. And when you bury their content, you're de-platforming them. Again, a phrase that has been discussed a lot in recent years.
So in this time, a key question has emerged in how we think about social media in this age: are Twitter or Facebook or X the digital versions of the public town square? And if so, how are these companies, which are private companies, and their AI algorithms shaping discourse in the public town square?
So before we get into that, and there's the question of whether that's, in fact, the right model to even use when you reason about these media, it might be useful to first understand what the legal framework is.
So do these companies have an obligation to allow anybody to say anything they want? Do they have an obligation to suppress disinformation? Does the First Amendment even apply to discourse on social media? So I wanted to turn to our panelists to answer some of these questions. I'll start off with Gautam.
GAUTAM HANS: Yes. So many of you may have legal or political science backgrounds and may be familiar with the First Amendment and its protections. The First Amendment protects the free speech and assembly rights of individuals and organizations. And that means that federal, state, and local governments generally have limited rights and limited abilities to interfere with speech.
Now, the First Amendment applies only to government actors. It doesn't apply, generally, to private entities. This is known as the state action doctrine. That also means that the First Amendment almost never restricts private entities like companies. Those entities can usually regulate speech in their own spaces without an individual claiming that a company violated that individual's First Amendment rights.
There are two pending Supreme Court cases that were just granted review a few weeks ago that implicate the free speech rights of social media companies against government regulation. Government interference with private entities and their editorial choices raise First Amendment issues. But the contours of this doctrine and its application to social media companies remains unclear, that's why the Supreme Court granted review.
The government has a limited ability to tell a private entity, like a newspaper or a social media company, what to publish or carry, because platforms have their own First Amendment rights to host the content they want. There are a few exceptions to this; for example, images that depict sexual exploitation of minors: the state can prevent companies from hosting those images.
And generally, we want social media companies to be able to moderate their own content. Because if we held them to the same standards that the government is held to under the First Amendment, that would mean that platforms would have a plethora of explicit content, spam, and undesirable speech, and they couldn't remove it. No one would want to see that platform or use it. And that's why platforms generally have the ability to moderate their own content.
So in response to the question talking about the public square, social media might be the town square culturally insofar as most of the platforms want broad user bases from different perspectives. That's better for their bottom line. But they're not public squares in the way in which a public forum under First Amendment law would have to take nearly all comers.
And if the private social media companies were treated as public fora, as I mentioned, their ability to pick and choose content would be extremely limited, largely negating those platforms' own First Amendment rights.
KAVITA BALA: Thanks. Karen.
KAREN LEVY: Yeah. And I'll just highlight one sort of adjacent legal issue that sits alongside the First Amendment issues that Gautam just highlighted, which is the degree to which platforms incur liability for the things that their users post.
So if I go on Facebook and I write something defamatory against Gautam, which I would never do. I like you very much.
[LAUGHTER]
But say I were to do that, right? If I defamed him, made false statements that caused harm to his reputation, he could make some claim against me for that, but Facebook would be immune from legal liability for any of those harms.
And the reason for that is Section 230 of the Communications Decency Act, which was a statute passed in the 1990s that immunizes platforms from liability for almost anything that their users do on the platform that causes harm to other people. There are some exceptions, but they're relatively narrow.
This is an important law that sits alongside First Amendment protections and is an important piece of the accountability picture for large platforms when it comes to content. Now, the initial reason Section 230 was passed was that, if you remember, in the 1990s, the tech industry was kind of just beginning to get off the ground. There was a lot of interest in providing some incentives for these companies to build themselves up and to foster innovation.
I think now we can say that those efforts were unequivocally successful. You may have noticed that the tech industry has really expanded over the last few decades. And so now, big tech sits in a very different position. There are questions about to what degree a law like this serves the public interest.
And there are lots and lots of discussions on Capitol Hill, on both sides of the aisle, in the courts, and in the public sphere about to what degree we maybe ought to repeal or reform these laws so that, potentially, there's some more accountability among big tech companies for users' behavior.
So this is something to keep our eyes on, I think, alongside some of the First Amendment concerns that Gautam raised.
SPEAKER 1: Great points. Yeah, and I'll say one of our visitors for the Free Expression Area, Eugene Wallach, just came into town. And he was talking about liability in the context of generative AI. And we'll get to generative AI later in this panel. I'll continue, actually, now switch over and talk-- we've seen, heard about the legal aspects, let's talk about perhaps the social aspects.
Essentially, what we're hearing is that companies have few obligations under the First Amendment and Section 230, which Karen was just talking about. So given that the law doesn't impose many constraints on their decision making, and they're going to use AI to automate their decisions and steer discourse, how do we approach, from a social science perspective, the question of how these platform decisions are impacting community and social interaction? Mor.
MOR NAAMAN: Yeah, so I mean, to me there's a paradox. Without moderation, we cannot have real free expression. And it's not about the companies banning users or restricting users in some way. To me, if they don't do that, if we just let Karen harass Gautam online, he's going to disappear from the platform. He's not going to be active. It's going to be a place where he cannot have his free expression.
And that is why we see the need for moderation: to create an environment where people can speak up. We saw it in our research looking at the 2018 election, where we measured the volume of harassment that candidates for the US Congress were receiving online.
And a lot of that harassment, not all of it, was created by maybe a few hundred or a few thousand users who were never banned from the platform, making it really inhospitable to candidates, and to marginalized or minoritized candidates in particular.
So this is not the free expression that we want. And this was even in the good days of Twitter, right? When they had those.
[LAUGHTER]
And we know of candidates who were driven away, not just from social media, where their voice is important, but from the political process altogether because of that kind of abuse. So this is why moderation is needed. But I think we're here to talk about AI. And the involvement of AI in this type of moderation is particularly challenging as well, right?
In the same research, we showed that the abuse is often very contextual and very hard to detect if you're using generic AI, so it's very hard for algorithms to do it. That was before generative AI tools were widespread, so we didn't look at those tools and their abilities.
But those present another challenge. We simply don't know how they work. And now, again, in the new days of Twitter and X, we have very little transparency into how these algorithms are being applied or how they are working in our information ecosystem. So that's one.
If we have time for another--
SPEAKER 1: Go ahead.
MOR NAAMAN: The other aspect of this is the question of amplification and what the AI and algorithms are amplifying. If journalists know that their voice cannot be heard on Twitter over the cacophony of everybody else participating, some of them bots, some of them human, some of them abusive, some of them not, they're just not going to bother with the platform, and that voice will be lost for free expression.
Just this week we saw a couple of data points from, again, my favorite example, X, formerly Twitter. One, they removed the verified label from the New York Times. So now the New York Times account on X is shown as a normal account without any verification, while other accounts just pay money to Elon Musk, is it $8 a month?, to get verified.
You get not only this sign of so-called trust, but also more visibility to other people, amplification on the platform, and so forth. So this is a serious and challenging problem as well.
SPEAKER 1: Sarah.
SARAH KREPS: Well, it is a difficult problem. And we know that 3/4 of Americans do not trust the content moderation decisions that the platforms are making. And so that's not surprising to anyone. But I think what is challenging is figuring out what to do about that.
And so some of what we've done as researchers is to try to assess and quantify and qualify the cause and effect of different moderation-- content moderation choices.
So one of the things that Facebook, now Meta, had done was to demote the role of politics in the news feed. This was a kind of action and reaction to the 2016 election, the 2020 election, and the accusations that these social media platforms were polarizing Americans.
So Meta reworked its algorithm to demote politics. And studies were then done to evaluate the effects of that. Well, it turns out that people became less knowledgeable about politics, they were no less polarized, and they were more disengaged.
And so it just points to the dilemma-- the conundrum of, OK, we know there's a problem. But it's not clear that the solutions that the platforms think will solve the problem actually will. And so foreshadowing, I think, something later in our panel is that, again, I think the role of universities is that we can study these questions in a very disinterested way to think about and analyze what are perhaps more appropriate solutions.
How do we continue to understand this evolution of a technology that really hasn't been around all that long? And where the consequences of it, I think, become more and more salient over time.
SPEAKER 1: Great points. And you're exactly right. When social media first came out, that was the whole point: we will engage the disengaged parts of the community, which was wonderful. And then there was the backlash. And now the studies actually show there is some validity to that, but it's much more complicated than the earlier accounts, and we haven't quite grappled with all of it.
But this is a perfect time, since both of you have already referred to generative AI, for us to talk a little bit about generative AI. As I said, it really exploded this past year, with technologies like ChatGPT and DALL-E.
And they produce very convincing, very compelling text and conversations. I mean you would easily believe that you're talking to another human who might sometimes go off and say random things and fake things in a very compelling manner. But it's a very compelling experience when you interact with them. Same with the visuals.
These technologies are able to produce fake information, and a lot of it. They're algorithms running on computers. So with generative AI creating text, images, and video that people can't differentiate from human-generated content, what are the concerns for democracy and free expression, and what are the opportunities? Sarah, do you want to start with that?
SARAH KREPS: Sure. Yeah. I could talk a lot about the risks, but maybe I'll leave those to my colleagues and talk about the opportunities. Because I think it's really easy to dwell on the risks of these technologies, and I've written more of the doom-and-gloom, AI-is-going-to-kill-democracy kinds of pieces. And I do believe some of that, because I worry about the way in which generated text will pass as human-written text.
But I'll bracket that for a second and try to put on my techno-optimist hat, which I wear with pride. Building on this engagement piece: there are a lot of people on the sidelines. There are a lot of people who don't have a voice.
And one of the things that generative AI can do is provide more access. And so I'm working with some democracy groups in the Global South to try to understand the ways in which generative AI can help disaffected communities, disadvantaged communities.
And so just to give an example of that: we can train these models in Indigenous languages. We think a lot about the English-language OpenAI models, but we can train them in different languages to provide access to disadvantaged groups.
I've been working with local mayors, for example, whose budgets are limited but who know that citizens in their town don't understand where to pay their taxes or how to pay their water bill. And it's not just paying your bills. They train these chatbots, essentially, to help citizens become more engaged, understand their communities, and be more involved. So I think there are real opportunities there.
And then, we've also been working on tools that can help elected leaders, for example, who are receiving a huge inbound volume. They get a lot of emails. Their goal is to represent citizens, and they're required to understand them. But how can they do that when they're overwhelmed with content? These same tools can help process and visualize what that inbound looks like, for leaders who might be too overwhelmed with the other tasks of representation, to help them understand the needs of their constituents. So I think it can work both ways, bottom up and top down, to bridge that gap in a democracy between policymakers and citizens.
SPEAKER 1: Mor.
MOR NAAMAN: I'll take the other side then, maybe being less of a techno optimist. I think we're headed towards what I call the post-human web. We right now have the 2023 web of human knowledge and we believe that most of the content there was created by people, actually written by people. Some of it high quality, some of it not as high quality.
But it represents a somewhat authentic view of human opinion and expression. This is all going to be frozen in time in 2023. Because from now on, we're going to have AI that can generate high quality looking content at scale and put it all over the web, all over our information system. It will be part of a communication.
And when we look at interpersonal communication, we'll have somebody writing bullet points and then the AI expanding those bullet points into an email. And the receiver will take that email and have the AI turn it back into bullet points for them. Everybody will be communicating via AI in a way that will cause a lot of mistrust, and a lot of mistrust in the information that we see online, which we are already prone to mistrusting.
A lot of mistrust in our interpersonal communication, right? We've shown in our research that when people evaluate content that they think may be written by AI, trust immediately drops. We called it the replicant effect: trust in the other person drops, and trust in the content of the communication drops.
And it will happen on the web as well. Another thing is that we don't even have AI that can reliably detect AI, and people are even worse than AI at detecting AI-written content. We've shown, and I think we also alluded to it, that AI can generate Shakespearean texts, poems, and media articles, as shown in Sarah's work, as well as emails, Airbnb profiles, all kinds of text and images that humans cannot distinguish from human-created content.
But even worse, in our recent research, a PNAS paper last year, we showed that it's not just that people can't distinguish; people have specific and wrong heuristics about what text is written by AI and what text is written by humans. And since those heuristics are predictable, the AI can take advantage of them and create text that is, as we call it, more human than human, right?
So suddenly, we're at a disadvantage, because AI knows how to exploit human weaknesses in evaluating content. So I think that's going to be a big challenge for our information and communication ecosystem. The other one-- maybe give me a--
[INTERPOSING VOICES]
SPEAKER 2: Keep going.
MOR NAAMAN: The other one is bias, right? The amount of bias that could be introduced by AI into our information ecosystem, sometimes maliciously and sometimes less so. I already mentioned the creation of large amounts of content by misinformation or disinformation actors, or maybe just by someone who wants to say how much they like the [INAUDIBLE] sandwiches on Main Street.
So that's one. But the other is bias that will seep into our communication and media in ways that we don't necessarily expect. We've shown, again, in our research that even the little smart replies that you get at the bottom of your Gmail, the ones that say, yes, I'll do it, or I'd be happy to, right? They will change your language to be more positive than it would be without them. The autocomplete that you all get will change the content that you write and how you write it.
And in our most recent research, we had people write their opinion about an important societal problem, like should we abolish the death penalty, or should we use standardized tests in education. And we gave them AI autocomplete suggestions that were biased.
So the AI always wanted to argue in a certain direction. And it turns out that people not only wrote more in the direction that the AI was nudging them toward; when we asked them later what they really thought, they had also shifted toward the AI's position. So this is, I think, very dangerous and worrisome.
SPEAKER 1: It's dangerous but it's also interesting because one of the--
[LAUGHTER]
In the following way: one of the problems we face in society is polarization, that you can't get people to change their point of view. And apparently, you actually can get people to change their point of view. So maybe these are mechanisms we can think about using to have people see alternate points of view, right?
And so, hopefully, longer term, this is the way of using AI for good. Having people become less entrenched in dogma and more open to thought.
MOR NAAMAN: Thank you, KB.
SPEAKER 1: I'm a techno optimist, as is perhaps obvious. Karen, do you want to comment?
KAREN LEVY: Yeah, I just wanted to add just two quick thoughts. The first is Mor and Sarah alluded to, at some point, it will become very easy for there to be a lot of AI generated information online. Maybe we're there already. But in some ways, we don't even need a lot to start to see some of these effects.
There's a concept that some legal scholars have espoused called the Liar's Dividend. And the idea behind the Liar's Dividend is that you only need a few pieces of disinformation, misinformation that appear believable. And the problem doesn't get constrained to those few pieces of information, right?
If you start to recognize that maybe I don't know if something is real or not, the question is not then about those specific pieces of content. It creates a much more generalized doubt, right? And then, that doubt can itself become weaponized.
So we see this a lot. If you think about your favorite political scandal of the 20th century or 21st century, we can all think of our own, in which there's been some form of accountability in the form of seeing someone on video saying something unsavory or seeing a picture of somebody with somebody else or something like that, right? The types of accountability that we've used forever to know more about what our political figures or public figures are actually doing and saying.
Now, there's a tendency when figures are sometimes caught doing or saying things that are unsavory or controversial to say, well, how do you know that that was real, right? How do you know that that's not a Deepfake? How do you know that that's not a doctored image? When, in fact, there really are no indicators that those images were fake, right?
But it's the existence of these few pieces of false information that allow for those claims, right? And that potentially creates this much broader crisis of trust and accountability than just those few bad actors creating a few bad pieces of information would engender.
Let me just add one other quick note. So I have two young kids; I have a third grader and a sixth grader. And my sixth grader has started bringing home assignments from school about information literacy. And I was really excited to see her doing this. But it also really scared me.
Because think about the indicators she's being taught for evaluating a piece of information: what is the source? What are the likely biases of the person who put this out into the world? What did they have to gain from this? What else can you learn about where this piece of information is coming from? How can you triangulate this with other pieces of information online? Should you just Google it? What are the different ways you could learn more about this topic?
Almost all of those, I think, could potentially be undercut by some of the forms of information pollution that Mor and Sarah and Gautam have referred to. Because it just becomes much more difficult to assess the source of information. So I'm really concerned about how we think about building new information literacy tools, given that the ones we have may not be so successful in the coming age.
SPEAKER 1: And that's a great point. And I'll just mention something, actually, Martha mentioned in her talk. There was a committee-- a university-wide committee on generative AI in education and pedagogy thinking about how generative AI, these kinds of technologies, could be used potentially by students to not do the work they're supposed to do. And there's a lot of concern around academic integrity.
But there's a lot of exciting opportunity because they can personalize their education. They can learn with a tutor that's always available. And that's sort of the promise of these kinds of tools. So it was a university-wide committee I co-chaired with Alex Coleman. And we made a bunch of recommendations around that.
But one of the things that we say is, you can forbid its use in your class if that makes sense. But you can also allow its use with attribution. And the key part is that students need to take ownership of verifying the information, of making sure that it's valid, and of making sure of that not by asking the AI, again, is this really accurate?
If all of you know the ChatGPT lawyer case, which is terrifying, not only did it make up all kinds of stuff, the lawyer asked the AI, so is this really all accurate? And the AI said, absolutely, it's all accurate.
[LAUGHTER]
And that is not an acceptable standard for our students to hold themselves to. So we believe we want to educate our students, and a lot needs to be done. Sixth grade is an early stage; hopefully this will reach back into elementary school. But we need to design those tools here on campus.
Talking about that aspect, the Liar's Dividend, so the question then is, what are the possible approaches for regulating new generative AI technologies and social media companies? Are there any promising directions? So the major platforms seem to be working with the White House, for example on voluntary safeguards.
I'll say, I'm a technologist, I come from the tech world. That is just unheard of, right? This is a community that's always been, get out of our way, we want to move fast. And we'll break things, but we want to move fast. And so for the community itself to say, let's think about regulations, is kind of shocking.
But then, the question should be asked, can a voluntary and a self-regulatory approach ever create meaningful standards? So Sarah, do you want to take a stab at that?
SARAH KREPS: Yeah, thanks, KB. So I've been working, through a grant from the Jain Family Institute, on an AI regulation project. It's a white paper that I'm working on with JFI and a PhD student here at Cornell, looking at these regulatory questions.
And it's been a good excuse to reach out to my law school colleagues and colleagues in different departments on the rest of campus to think about these big regulatory questions. Because we have new technologies. And the question is, do we need new laws for these new technologies? Or are there frameworks that might be older but that might still have new relevance? And so that's what we're working on in this white paper that is coming out in the next week or two to coincide with this upcoming AI Summit.
And so one of the things that we look at is the 1976 Copyright Act. You think, well, what could the 1976 Copyright Act have to say about generative AI? But it's not clear that it doesn't apply. And so we're thinking about how we can understand these new technologies through existing laws, and, if we can't, what we need to be doing.
And so in that regard, these platforms are coming together, in part through the cliche of public-private partnerships with the White House, to come up with some sets of norms and standards. So it's OpenAI, it's Anthropic, and they're trying to think through, what would that look like? How do we think about transparency? How do we think about whether it's open source?
And I think that's a good start. I don't think it can end there, for a couple of reasons. It's sort of the who's guarding the guardians question: they're the people in control of these models, so maybe they're not the best suited to regulate themselves.
But it also leaves out a lot of the smaller companies that are working in this space. So I think that's a good first step. It's in cooperation with executive branch agencies. But I don't think it can end there. But again, like I was saying at the outset, these are very kind of trial and error kinds of things. We have new technologies and we're going to have to see over time what's working what's not and then try to rein in the excesses. And when we find out that these are eliciting distrust, figure out new solutions.
SPEAKER 1: Yeah, you mentioned copyright law. I was talking to faculty members who say that in the past, if you taught copyright law, it used to be just dry and the students weren't engaged. And now it's back. The old is new again. Everybody cares about copyright law. So if you're in copyright law, it's back. Gautam.
GAUTAM HANS: So in between finishing graduate school and pursuing my academic career, I spent four years working on technology policy as a public interest attorney in D.C., with a focus on privacy and free speech. And so I spent a lot of years, and a lot of meetings, talking about privacy regulation, commercial privacy regulation, self-regulation, and whether or not there was a domestic legislative approach.
And having been through some of those conversations, disputes, battles I'm a little skeptical of good faith from the companies absent any interest convergence. So sometimes we see on surveillance or free speech that the interests of the public and the interests of the companies are more aligned. But I think when it comes to these technologies, I'm not as confident that will happen.
And we also can't always feel confident that the companies will undertake a regulatory process with any true vigor or speed. So I think sometimes when conversations begin from the companies with government entities, it's not necessarily because they truly want to come to the solution, but there might be some other economic or political considerations that they're engaging in.
But I think this is complicated and I think having worked with many people in companies assuming that it's always about the bottom line is also, I think, a little simplistic.
But even beyond the questions of administrative law and policy, another hot area, there are questions in this specific area of technology regulation, AI in particular. We live in a non-ideal world. The odds of domestic policymaking with any teeth on a problem of this scale and complexity seem pretty low. There are a lot of challenges, and people may have noticed that regulation is challenging to implement domestically and internationally.
So perhaps I think the model that Sarah was describing is the best outcome and the best approach that we can engage with at this moment, assuming that it doesn't necessarily calcify the domestic policy debate in terms of public sector action. Because we've seen that with privacy and data. That's why I think my own experiences have made me a little burned and a little broken.
[LAUGHTER]
SPEAKER 1: It's getting worse.
GAUTAM HANS: I'll mention, too, the international component. That's a whole other area that we don't have time to get into in depth in this panel, but this is not just a domestic problem. We've seen how the social media companies and the technology industry have had effects not just domestically on our own political and social movements, but in many countries.
And I think we can all think of the countries we know well or have connections to and how social media and technology have changed those domestic dynamics there in ways that we in the United States have not always been as cognizant of or as responsive to.
So I truly think a transnational solution, perhaps even less likely than a domestic one, is necessary. But we also have to focus both on what we can achieve now and on what we're going to continue to work through in the medium and long term.
SPEAKER 1: And that's a great point. And I think that, actually, is one of the pressures that technologists are facing. Even if all the big tech companies, and many of them do understand what they're doing, wanted to, they can't just sit and agree among themselves, because international actors are going to do completely independent things. Therefore, they have to bring everybody along, or they all have to go forward together. Karen, do you want to comment?
KAREN LEVY: Yeah, I think I'll say maybe two things about policy and regulation. One is that, and I think rightly so, a lot of the regulatory conversation has been about what constraints we should place on companies, primarily around AI and generative AI. I definitely think that's a conversation we should be having, and I'm glad to see it evolving.
Another important question, though, is how do we think about distribution of the benefits of these technologies so that they're not only serving the interests of the powerful or the interests of capitalized organizations, but that they're being put to use for the types of city and state governments that Sarah was alluding to? Or that they're being put to use to help make more efficient community serving organizations who could really use these tools for allocation or for distribution or for education of citizens.
Some of my own work has kind of looked at what are the right kinds of federal funding models to ensure that the benefits of AI get distributed equitably? What are the opportunities and the challenges that cities and states face when they're trying to implement these tools for supporting their civic responsibilities? So I think it's critical that we also keep our eye on how those benefits don't just get concentrated, but can be distributed to the people who most stand to benefit from them.
The other thing I'll say, as a slight counterpoint to Gautam: I also feel like I've been a little bit burned by promising discussions of tech policy in the past. But I am excited that there's a lot of public attention, in Congress and in the Office of Science and Technology Policy at the White House and in other contexts, to regulation of AI and of technology more broadly. I think there's a long way to go between where we are now and actually getting things on the books that will have the teeth that I think are required.
It's important to recognize that those all sit alongside this other more long standing set of conversations. So I alluded earlier to Section 230 of the Communications Decency Act. These very basic questions that really don't have to do with AI necessarily, but will eventually, or are beginning to intersect with questions about AI and its use in content moderation and platform regulation, right?
So I mentioned, there's a lot of interest in reforming or even repealing Section 230 and lots and lots of competing proposals in the House and Senate about what that might look like. I'll spare you the exhaustive list of what all the proposals are. But some of the common areas of discussion are maybe we should say that companies only get this very attractive liability shield if they do other things that we think would be publicly beneficial.
If they're transparent about their use of AI in content moderation. Maybe we require them to have a particular stance towards misinformation or health misinformation. This is one that you hear a lot, maybe we require them to be neutral in how it is that they go about amplifying or depressing content on the platform.
A lot of the issue with these proposals is that when the rubber meets the road, a policy like neutrality is almost impossible, first, to agree on, and then to actually implement in practice. We have some precedent for thinking about this from broadcast television and the Fairness Doctrine.
But when you're operating at the scale and speed-- the global scale and the just millisecond speed that these global platforms operate on, it becomes much more complicated to decide what something like neutrality means. So those are really complicated issues I think that we're actively wrestling with and that will necessarily be part of the conversation about AI regulation too.
SPEAKER 1: That's great. Yeah, and transparency too. It sounds like the right thing to do. It's just hard to do, actually, because the AI itself is not very transparent. The human beings don't understand what the AI is doing. So it's all very well to say we'll make the AI transparent, but it is disinclined to accommodate.
[LAUGHTER]
So I'd like to switch and talk a little bit about where Cornell can play a role. As I say, I was a techno optimist. I'm not burned or broken yet.
[LAUGHTER]
So there are opportunities around democratization of content creation, which is super exciting. And so what can Cornell do to really weigh in on this space of AI and free expression? So I'd love all four of you to share your thoughts on that.
KAREN LEVY: Yeah, I can kick off with that. I think it comes back to something, actually, that President Pollock highlighted in the State of the University address, which is the degree to which Cornell is not a siloed place.
I think we say this a lot, but I continually come back to this in my own experiences here. That it's so unlike other higher education institutions with which I've been affiliated in that I sit in our information science department alongside philosophers and historians and computer scientists and mathematicians and physicists and operations researchers and I'm sure I'm forgetting like probably five other disciplines.
It's just like an incredible mixed bag of people who bring, basically, every form of expertise you could think of to the table. I co-lead a research group called the AI Policy and Practice Research Group, which is about 25 folks who think about AI policy and practice from this variety of perspectives and we collaborate on work. We advise students jointly.
I just can't overstate how unusual it is to be in a place where you can bring law and policy and humanities and social science and technical knowledge all to the table at the same time. And that work, I think, is truly challenging, right? The translational work required to do research that is really integrative, not just, well, I'll say my piece from computer science and I'll say my piece from sociology, but that really tries to integrate those perspectives to solve a different kind of problem, a more ambitious kind of problem.
That, I think, is the type of knowledge that we need to generate in order to address these really complicated new technological developments. And it's so exciting. I get really excited to be able to teach my classes where students are coming from all of those different perspectives. To be able to have hallway conversations with other faculty and other researchers who are bringing all of that to the table. It's just like a playground intellectually.
And I think that that offers real promise. Because we then take that knowledge and we apply it to problems on the ground. We engage with policy makers. We try to engage with the public. And that, I think, is the type of knowledge that Cornell is really uniquely situated to generate.
SPEAKER 1: Yeah, Mor.
MOR NAAMAN: Yeah, so I sit alongside all these people, and alongside Karen, so that's an extra privilege. And as Karen said before, we're used to new technologies being adopted and used by the powerful institutions in our society. And I think Cornell is a great place to think about how AI can be deployed in the public interest.
And indeed, that takes an interdisciplinary, multidisciplinary approach to understand, for example, how to develop and train those AI models. One small example: I'm involved in an effort here to develop a potential AI institute on AI and human communication.
And those language models that we are using, ChatGPT and others, are literally optimized for guessing, for predicting, the next word that you will type. That's how they work. That's what they do. That's their objective function.
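To make that point concrete, here is a minimal sketch of the next-word objective Mor is describing, written in Python and assuming PyTorch; the tiny vocabulary and two-layer model are illustrative assumptions for this transcript, not any panelist's or vendor's actual system.

```python
# Minimal sketch of next-token prediction: the model is trained only to
# guess token t+1 given token t. Toy sizes; not a production system.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),   # token ids -> vectors
    nn.Linear(embed_dim, vocab_size),      # vectors -> scores for the NEXT token
)

tokens = torch.randint(0, vocab_size, (1, 16))    # one sequence of 16 token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict position t+1 from position t

logits = model(inputs)                            # shape: (1, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # training just pushes the model to guess the next token better
```

Everything such a model learns comes from minimizing that one loss, which is the point being made here: change the objective, and you change what the system is optimized to do.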
But what if this AI was trained to do something else, right? To create a balanced point of view among everybody who's typing, some more global goal. Or to support the mental health of people who are using online chatbots to communicate about their mental state.
So this kind of thinking, and how to build that back into the neural networks that power these models, is a question that we can address with both a deep understanding of the computer science and algorithms that go into it and an understanding of the societal needs and functions. So that's one: how to develop the models.
The second, as we've seen here before in Sarah's work and some of my own work, is understanding AI's impact on society and on humans, studying it. I think Cornell has been a really unique place where we can do that: someone who sits in the government department, for example, but has a very strong background in what this thing called AI is and how to study it. And collaborations, right? In fact, our AI communication project is a collaboration with Karen and others.
I've had my PhD student work with Sarah on her project. So these kinds of collaborations, which help us understand the impact of AI on society, are something that we do well here at Cornell.
And then, developing new AI applications specifically to help in various contexts: to help people, to help the disadvantaged. I think Sarah gave us a few examples earlier. That's another thing that we can do well here with this understanding. And finally, as we heard on this panel as well, understanding what kind of regulation and what kind of policies are needed, not just from our government, but also for our tech platforms, and providing them with information about what we think would be a good way to address the challenges that they see with AI. Again, something that we do well here.
So, overall, I think Cornell is well positioned to address these very, very difficult challenges with the talent that we have here. Not just on stage.
[LAUGHTER]
SARAH KREPS: Yeah, I would just highlight the role of a university in general, and Cornell in particular, in responding to this question. I've worked in industry and in government and in think tanks, and I feel very strongly that the university has a particular credibility. It's not that these other institutions don't; the AI labs are doing great work.
But as Gautam said, there's an assumption that they have some incentives baked into what they're doing. And the work in a university, because we're kind of disinterested third parties, I think the rigor of our analysis can really have even more of an impact because of that.
And so within universities, and here I will acknowledge my bias, I think Cornell is uniquely positioned to have a voice in this moment. And I thank the provost for the AI radical collaboration because I think that was great in getting the cross-college collaborations going. Identifying who's working in this space to really generate that-- those collaborations across different units. So I think that's been really successful.
And then, within the classroom. Like Karen, I teach classes that have students from across the university. I taught an AI law, ethics, and policy class. There were CS students, IS students, Arts and Sciences students. I had a law student and engineering students. It was fantastic.
And I think it's really important for us to think not just about the research, but our students are going off to work on these platforms. And so we have a role to play in opening the aperture and helping them understand it's not just about ones and zeros. But what are the bigger questions they should be asking about the technologies that are developing?
And so in that sense, too, given just how multidisciplinary, interdisciplinary we are at Cornell, our role as educators to this next generation of technologists, I think, cannot be minimized.
GAUTAM HANS: So lawyers, we tend to gravitate towards problems, risks, challenges, sometimes solutions.
[LAUGHTER]
We try. But there was a distant time before I was a lawyer and I was an English major. And I really take that humanities background very deeply into my work as a lawyer. And I think AI, as we've talked about and as we've seen, provides so much opportunity for creative expression, for humanistic questions that animate much of what we do here at Cornell and in academia more generally.
And I really echo what has been said about interdisciplinarity, a term that academe and universities love to talk about. This is my second year on the Cornell faculty. I was previously at an unnamed other institution where they also talked about interdisciplinarity quite a bit, but here the practice is, for me, much more deeply felt. The strengths that we have in so many departments and schools and disciplines are matched by the interest in collaborating across them. That's just not true at every university, but I think it is quite singularly true here. And I'm lucky to be able to learn from our panelists and from many others in the university on these topics, because lawyers also don't have all the answers. You can tell your lawyer that.
SPEAKER 1: No, really?
[LAUGHTER]
Don't say.
GAUTAM HANS: And I think we talked a little bit, too, about how existing technologies created similar existential challenges. As a free speech person, I think about how this dates back even before digital technology, to analog. The printing press itself was thought of as quite provocative, and truly was a quite provocative piece of technology that really changed human society.
And it facilitated individual expression in a way that was, at the time, unparalleled and unprecedented. So AI creates these challenging puzzles for law, computer science, and policy. And our strengths in those disciplines, and in the interplay among them, I think, mean that we and our students are really well positioned to address these challenges.
And for me, I've always been a public interest attorney. And thinking about those challenges in order to promote the public good, the greatest good, is really central to my work. And I think we try to inculcate those values in our students as well.
SPEAKER 1: Thanks. So while we are not naming universities, before I came to Cornell, I was at universities that will remain unnamed, but they were institutes of technology, one of them in Cambridge, Mass.
[LAUGHTER]
But it will remain unnamed. And I love Cornell because of this breadth of disciplines, and because everybody is willing to engage across campus. There are really no barriers here to the exchange of ideas, which is just absolutely beautiful. And I hear that now; I'll put my dean hat on.
When I talk to other colleges of computing that are establishing themselves, they like this model. And they find they can't implement it in their university because the barriers are too high between disciplines. Whereas, we just have that in our DNA here where we work across-- we don't have barriers, and so we work across disciplines. So I love that about what we do here.
So we're getting close to the end of the hour. I'd love each of you to give just a last parting comment, techno optimists and the burned-but-hopefully-still optimists, and tell us what you think.
GAUTAM HANS: I started by talking a little bit about the First Amendment. And I think those of you who have an interest in or experience with the First Amendment know that there are all these questions about the First Amendment and regulation. And I am someone who is very deeply committed to the values and the principles behind the First Amendment, but I don't think it is inconsistent with regulation.
And certainly, with so many pressing social issues, the First Amendment and regulation have to coexist. That doesn't mean that the government can do whatever it wants when it touches on things that implicate speech. But I think there are ways to craft First Amendment compliant and constitutional regulation and in an area that is so challenging but exciting like AI and technology more generally.
We have to create a world and a jurisprudence in which regulation of these technologies and constitutional rights and values can be simultaneously upheld. We have to be able to hold two competing thoughts in our heads. Particularly when it comes to free expression, I deeply believe that free expression and many other things can coexist, individually and at the societal level.
KAREN LEVY: Maybe I'll just pick up on Gautam's point. One thing that I talk about a lot in my classrooms, which also involve folks from across campus: I co-teach a big undergraduate class that has 40 different majors in it, so nobody's coming in with the same background.
And the key, I think, to your point, Gautam, is to find the way to take advantage of that and not to use it as a reason not to ask hard questions, right? Or not to say, OK, well, I guess we'll just never understand each other.
What I talk about in my classes is intellectual generosity. And I think that free expression in the university setting is best realized as intellectual generosity. What I tell my students I think that means to them is reading and hearing ideas with generosity towards what can they learn from this. They don't have to agree with every bit of it, right? You're allowed to-- it's good to be critical in the sense of rigorous in your thought process.
But sometimes, I think we jump too quickly, especially in-- maybe in higher ed institutions where students sometimes tend to think that the way to distinguish themselves is to show that they're smarter than what they're hearing. And I think the more important meaning of critical here is rigorous where it means find what's good here. Find what you can use. Find what causes you to reflect on something differently and integrate that into what you now know about the world.
And it also, I think, this is a really critical part of free expression, requires being generous with yourself and giving yourself the capacity to change your mind about something. The capacity to revise your ideas over time. A sphere in which to be in conversation with and evolve your ideas based on the benefit of being in the room with people with different viewpoints than your own.
That's something I think I've really reflected on a lot over this free speech year. And I really value that a lot. It definitely makes my research better and makes it a much more pleasurable place to teach.
MOR NAAMAN: So again, I'm going back to this: I think the year 2023 is a very exciting time. We're seeing yet another technological revolution that will change the fabric of our society in ways that we don't know yet. Most of you in the room have now seen AI arrive. Before that, we saw social media. Before that, mobile phones. Before that, the personal computer and the internet. I guess that's most of us, except the new undergraduate trustee. They have seen none of that.
[LAUGHTER]
Every time we have such technological and social change, there is an opportunity to steer the change in a direction that we think is beneficial. And I think that's what we should be thinking about now, and thinking about urgently, because we've seen how fast these new technologies spread. And they will change us in fundamental ways very, very quickly.
So the time to think about it is now. And it's good that we're having this panel, but we've also been on this for a few years now.
SARAH KREPS: Yeah, no, I share that view that we're in an exciting time. And that I think the key challenge will be how to harness the good from this technology while guarding against the risks. And I think that we're making progress toward finding that balance by asking the right questions.
Because we're still, I think, early on. Some of us have worked in the NLP and AI space for decades, but the public-facing aspect of this, and the way in which it's being applied for scientific inquiry, is relatively recent and nascent.
And everything I read, and the work our colleagues in neuroscience like Jesse Goldberg are doing with AI, points to the fact that it's unlikely we will find cures for cancer without these technologies. So we certainly don't want to forgo that. But we want to guard against the misuses.
So navigating that, again, kind of will take asking the right questions by the right people. But also, I think, a very adaptive approach. Because the technology is changing so quickly. So the solutions that we come up with in 2023 may no longer be relevant in 2024. So I think continuing to evolve and adapt on our thinking will be key here too.
SPEAKER 1: That's a great point. Yeah. And I'll say you're exactly right. It's new for some of us. I've been in computer science for a long time. But 10 years ago when social media companies came, they just were all about the tech and they just weren't thinking about society other than it's going to be awesome for everybody, right? That was the general world view.
And this time, I'm seeing a completely different take from the techies. They have a much more nuanced and concerned approach, if you look at the kinds of letters of concern that are going out. They want to engage with social scientists and policymakers and lawyers and political scientists, which is really heartening.
I'll say, on the flip side, companies and nations are competing, and they are not collaborating. And this is something, actually, I've raised in various review panels I serve on. Somebody from one of the big tech companies floated an idea.
Of course, maybe AI safety is an area we can collaborate on. And then everybody went, no, there's too much of a competitive edge to be gained by doing a good job on AI safety. So their heads are still not quite there, which means we in academia have that much more of an important role to play.
Because we are neutral and they recognize that. We not only are neutral from that point of view, we're not trying to make a profit. But they also understand that we have the world leaders in thinking about the hard problems and the disciplinary strength that no single tech company can ever have.
And so that's, I think, a great responsibility. And it's actually experts like you who need to do that work. So I hope you'll continue to do that and collaborate with each other. With that, I want to thank you all for doing this panel. I want to thank all of you, and enjoy TCAM.
[APPLAUSE]
Enjoy TCAM. Have a great weekend.
The Joint Annual Meeting featuring Kraig Kayser, MBA '84, Chair of the Cornell University Board of Trustees, and Arturo Carrillo '96, MEng '97, Chair of the Cornell University Council. President Martha E. Pollack's State of the University Address will follow, leading to an inspiring Keynote Program after a short break.
Keynote Program: Artificial Intelligence and Free Expression
Artificial Intelligence is here and changing our everyday lives faster than we can keep up with. At the same time, free expression is being challenged from all angles. And with the explosion of natural language processing and public access to generative AI tools like ChatGPT, technology is rapidly shaping how human beings express themselves, get work done, understand their societies, and interact with others.
Kavita Bala, dean of the Cornell Ann S. Bowers College of Computing and Information Science and steering committee member of the “The Indispensable Condition: Freedom of Expression at Cornell” initiative, will lead a panel discussion on the profound impact of AI on speech and free expression.
Faculty from across colleges and campuses will explore the challenges new technologies bring: ethical content moderation, personal and social expression, global perspectives on AI, its impact on democracies and policy, the First Amendment, harnessing technology for the public good, and more.
Learn how Cornell is leading the way and why we’re uniquely positioned to shape AI’s and our own future.