Center for Policy Studies
Public Affairs Discussion Group
Why Not to Panic About A.I.


Kalle Lyytinen, Ph.D. – Iris S. Wolstein Professor of Management Design

Friday September 29, 2023
12:30-1:30 p.m.
Meeting Both In-Person and by Zoom

Alternate Room: Mather House 100
Case Western Reserve University

Dear Colleagues:

It’s about time the “Friday Lunch” addressed controversies about “A.I.” I promise I wrote this explanation myself.

Within a week of its launch by OpenAI on November 30, 2022, ChatGPT attracted about a million users trying out the technology, including one researcher’s seven-year-old daughter (she liked it); over the following weeks it set off a burst of coverage about the possible consequences of Generative Artificial Intelligence, along with a weird mix of fascination and concern. Immediate opinions ranged from Paul Krugman wondering about effects on skilled jobs (December 6) to Zeynep Tufekci wondering what Plato (who apparently didn’t like the alphabet) would think (December 15). One doesn’t have to go far in the Marvel Cinematic Universe to see the range of so-far fictional possibilities. But public attention was turbocharged when, on May 30, 2023, the Center for AI Safety issued a “Statement on AI Risk,” declaring that,

“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously…

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Numerous leading figures in AI development signed the statement, explaining that even though there were disagreements about the details of the risk, there was wide agreement that there was plenty to worry about. Given these comments, it should be no surprise that Pew’s polling in August showed much more concern than excitement about AI among the general public.

So what is all this about? “Generative Artificial Intelligence” refers to systems that can create text and images, sound and video, with ChatGPT itself creating text. Other programs, such as the recently updated DALL-E 3, can generate images to fit text, or mimic a person’s voice. It is an elaboration on traditional AI, which one overview explains “has largely been used for analysis, allowing people to spot patterns and make predictions by assessing huge sets of data.” A relatively straightforward example of that technology is training machines to recognize tumors from scans, based not on instructions about what to look for but on associating later evidence of tumors with the material within the scans that the machines record – without the users necessarily knowing the “artificial” logic. But it might also be used, for example, to identify “suspicious” behavior or even targets for drone attacks.
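For readers who want a concrete picture, here is a minimal, purely illustrative sketch of that kind of pattern-finding, written in Python with the scikit-learn library and using made-up numbers rather than real medical data. The point is only that the program is never given rules about what a tumor looks like; it learns associations between scan-derived measurements and later-confirmed outcomes.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical, randomly generated stand-ins for measurements extracted from scans.
rng = np.random.default_rng(0)
scan_measurements = rng.normal(size=(200, 5))
# Hypothetical "later evidence": which past cases turned out to involve a tumor.
confirmed_tumor = (scan_measurements[:, 0] + scan_measurements[:, 1] > 0).astype(int)

# "Train" on past cases: the model only learns statistical associations.
model = RandomForestClassifier(random_state=0)
model.fit(scan_measurements[:150], confirmed_tumor[:150])

# Flag new scans based on the learned pattern alone, with no explicit rules.
flagged = model.predict(scan_measurements[150:])
print(flagged[:10])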

ChatGPT gets so much attention because almost anyone who cares to can engage with it – unlike, say, medical imaging machines or batteries of military drones, which have rather more specialized users. It is fairly easy to imagine ways that Generative A.I. can cause lots of trouble, though the biggest concern on campus, the potential for rampant cheating on assignments, does not seem extinction-level. But other A.I. applications have also long caused concern among experts and some policy-makers.

A February 2018 report, for example, addressed “The Malicious Use of Artificial Intelligence” (and Machine Learning, which is sort of a synonym). It emphasized both that AI systems could be designed in a threatening way (e.g. surveillance in an authoritarian state) and that AI systems that were not obviously threatening could be hacked – “e.g. the subversion of military lethal autonomous weapon systems.” The European Union has been working to develop a regulatory framework, from a White Paper on “A European approach to excellence and trust” in February of 2020, to a proposed regulatory framework from the European Commission in April of 2021, to the EU Parliament supporting a draft “AI Act” in June of this year – though actual adoption is uncertain because it requires agreement among the Parliament, the European Commission, and the member states on the European Council.

But such concerns are countered by the technological optimism and promotionalism that seem central to our current culture. Thus in late June, “(m)ajor tech founders, CEOs, VCs and industry giants across Europe… signed an open letter to the EU Commission, warning that Europe could miss out on the generative AI revolution if the EU passes laws stifling innovation.” There is little reason to believe that the U.S. political system could generate a coherent response to much of any threat.

At the same time, both hope and fear share two problems. First, both assume that AI will have great effects because it will be greatly effective. From my own observations of applications in medical care, I have my doubts. Second, they don’t tell us much about how organizations can or should combine human and machine input or activity. It’s interesting to say we should be concerned, but what do we know about that?

The interaction between humans and machines that learn is one of the subjects of Professor Kalle Lyytinen’s current research. He argues that we are seeing “new, emergent, sociotechnical systems where machines that learn join human learning and create original systemic capabilities.” These “will change many facets of the way we think about organizations and work.” Good? Bad? Mostly a waste of money and time? I’m not sure how he will answer, but it makes sense to ask research questions rather than panic! For the moment…

In-Person and Virtual Attendance

This Friday’s meeting has been moved from the Kelvin Smith Library to Mather House 100. Mather House is the building in between the Thwing Center and the Church of the Covenant. The main entrance faces east, towards the Church.

We will also meet in Mather House on October 13 and October 20.

We also continue to offer the meetings on Zoom. We do require pre-registration so as to avoid “Zoom-bombing.” The pre-registration link is posted below.

The discussion begins at 12:30 p.m., but the room should be open no later than noon. We try to have beverages and refreshments set up soon after that. Participants should be able to sign on to Zoom by noon as well. But please remember that not much will be happening online until the talk begins at 12:30 p.m. You do not need to show identification to enter Mather House, but I don’t understand why anyone would walk around without any, anyway.

Zoom participants should speak up when asked for questions or comments, or submit thoughts through Zoom’s chat function. Please keep yourself muted until you choose to speak.

Each week we will send out this newsletter with information about the topic. It will also include a link to register (for free) for the discussion. When you register, you will automatically receive the link to join the meeting from the Zoom system. If you do not get the newsletter, you should also be able to get the information each Monday by checking http://fridaylunch.case.edu. Then, if you choose, you can use the contact form on that website to request the registration link.

This week’s Zoom link for registration is:

https://cwru.zoom.us/meeting/register/tJUlc-ChqzorHtwUow0Ngxke33zm0wsxAvK8

After registering, you will receive a confirmation email containing information about joining the meeting.

Please also e-mail padg@case.edu if you have questions about arrangements or any suggestions. Or call 216.368.2426 and we’ll try to get back to you. We are very pleased to be partnering this semester with the Siegal Lifelong Learning Program to share information about the discussions.

Best wishes for safety and security for you and yours,

Joe White
Luxenberg Family Professor of Public Policy and Director, Center for Policy Studies


About Our Guest

Kalle Lyytinen is the Iris S. Wolstein Professor of Management Design; chair and professor, design and innovation; and faculty director, doctor of management program. Lyytinen’s research helps define how rapidly changing digital innovations shape organizations. His work helps organizations know how to identify, absorb, manage, implement and be transformed by digital innovations. His recent projects have focused on engineering practices, telecommunications and software development organizations. Lyytinen studies the adoption of new technologies, new forms of collaboration, and new ways to determine system requirements.

Lyytinen joined the Weatherhead School of Management faculty in 2001. Since then his teaching interests have focused on digital innovation theory, new business venturing, design theory and methods, research methods and theory.

Schedule of Friday Lunch Upcoming Topics and Speakers:

October 6: COVID-’23 and Beyond. With David H. Canaday, MD, Professor of Infectious Disease and Associate Director of Research for the Geriatric Research, Education, and Clinical Center, Cleveland VA.

October 13: To Be Determined. Alternate Room: Mather House 100

October 20: One Semester Away from Crisis: Small Colleges and American Higher Education. With Tom Bogart, Ph.D., Visiting Professor and Chair, Department of Economics. Alternate Room: Mather House 100

October 27: Storefronts, Communities, and the Changing World of Retail. With Michael Goldberg, Associate Professor of Design and Innovation; Executive Director and Associate Vice President, Veale Institute for Entrepreneurship.

November 3: Dobbs and Doctors. With David N. Hackney MD, Division Director, Maternal Fetal Medicine, University Hospitals of Cleveland.

November 10: Who’s Legally Responsible When “Self-Driving” Cars Go “Eyes Off?” With Cassandra Burke Robertson, JD, John Deaver Drinko – BakerHostetler Professor of Law.

November 17: Axios Cleveland and the Future of Local Media. With Sam Allard, reporter for Axios Cleveland.

November 24: Thanksgiving Break

December 1: Civil-Military Relations in Egypt. With Dina Rashed, Ph.D., Associate Dean of the College for Academic Affairs, University of Chicago.

December 8: To Be Determined.

Visit the Public Affairs Discussion Group Web Site.

Center for Policy Studies | Mather House 111 | 11201 Euclid Avenue |
Cleveland, Ohio 44106-7109 | Phone: 216.368.6730 | padg@case.edu |
Part of the: College of Arts and Sciences

© 2023 Case Western Reserve University | Cleveland, Ohio 44106 | 216.368.2000