
Emphasis on Intelligence

Faculty and alumni alike view the Age of AI with excitement…and caution.

by Mike Barone

Hartwick has always exposed students to the latest advances in their fields. From business and computer science to biology and nursing, they learn using state-of-the-art equipment, industry forecasts and best-practice methods whenever possible. Yet today, as artificial intelligence (AI) creeps into virtually every sector imaginable, a number of faculty members and alumni are weighing in with interesting perspectives. While they agree AI usually saves time and money in endless scenarios, they worry about its pitfalls and unintended consequences, ranging from issues with information quality to displaced workforces to ethical dilemmas.

Perhaps no one is doing that more than David Polgar ’01, founder of All Tech Is Human. The attorney and educator founded his nonprofit think tank in 2018 in response to witnessing an immense need to strengthen the nascent Responsible Tech movement and ecosystem. The organization has united more than 55,000 individuals across the globe in its seven-year history, with the goal of addressing major technological and societal issues to create a tech future aligned with the public interest. It works like a micro United Nations, he says; in fact, he has worked with the U.N. multiple times in recent years on topics such as reducing online gender-based violence and ensuring trustworthy elections across the globe.

“There was no more important issue,” Polgar said, as he observed developers, engineers, manufacturers and social media moguls making decisions with little regard for human health and safety. “I saw it as problematic that we were leaving all the decision-making solely to the tech industry. We need educators, doctors, lawmakers, ethicists and everyone in between involved.”

Polgar likens its evolution to that of the automotive industry, which went unchecked for far too long and resulted in needless deaths and injuries due to safety shortcomings that could and should have been regulated far sooner. Its Wild West approach eventually gave way to order and reason, thanks to innovations such as seatbelts, snow tires and airbags, along with legislation such as driver’s licenses, speed limits and car inspections, each designed to protect the users — and innocent bystanders.

“We didn’t just leave it up to the auto makers. We created laws and societal norms to help place some parameters around it,” the former TikTok Content Advisory Council (US) member noted, adding that several lawsuits have been filed against OpenAI and related providers.

Polgar’s thoughts are shared by Stefanie Rocknak, chair of Hartwick’s Philosophy Department and coordinator of the cognitive science & AI program. She is especially concerned about the ethics of AI and encourages her students to ask hard questions about AI accountability.

“We’re looking at the benefits AI creates, and weighing those against the harm it causes,” she said. “It’s coming at (society) 100 miles an hour and we don’t know what to do or how to think about it.”

Rocknak is also a renowned sculptor whose work has been on display in such settings as the Smithsonian Institution in Washington, D.C., and the windows of Saks Fifth Avenue in New York. Perhaps most famously, she created the Edgar Allan Poe sculpture at the corner of Boylston and Charles streets in Boston — a piece that has become iconic among the city’s travel and tourism landscape. Thus, she is instinctively drawn to the many issues AI is causing artists.

“AI creates, but that means that it can take the creative space away from people. Most artists and philosophers are concerned about that,” she warned. “Why? Because we like making art, writing and creating. That’s part of being human. AI will never take that away.”

She is quick to add that she has seen many positive uses for AI’s role within the world of design, such as its ability to create custom prosthetics for amputees. She also marvels at the volumes of existing research that it can find and curate for physicians in such a short time.

“It certainly has its place,” she affirmed, “but it’s not going to replace a human doctor. It can and should only enhance medicine and other fields.”

“The focus is to understand what generative AI is, as well as its potential role in academic and business settings.”

Weian Wang, associate professor in the Department of Business Administration and Accounting and the Department of Computer and Information Science

RETHINKING THINKING

That view of AI enhancing, rather than replacing, human expertise mirrors the experiences of Megan (Davis) Kennedy ’04, senior vice president for client services at the marketing agency LWD. Her firm focuses on direct response television (DRTV) advertising, which encourages immediate consumer action like calling a toll-free number or visiting a website. Having been with LWD since she was 22, she has witnessed many advances and efficiencies across the marketing communications industry, from digital advertising to social media influence. However, nothing prepared her for all that AI has brought to her business.

“I had to re-learn my job…and that was somewhat humbling,” Kennedy said. “I was learning from people younger than me — but I had to, to ensure I stayed current and valuable to my clients.”

She and her team utilize AI in many ways, with programmatic media buying among the most significant. They use the algorithms of demand-side platforms (DSPs) to evaluate an ad’s effectiveness and adjust their recommendations in real-time, saving substantial time and enhancing their effectiveness as they become even better stewards of client dollars.

“It makes us much more responsive,” she added. “Previously, we’d have spent days on those activities. Now, we have a tool that can summarize it in minutes and show us the opportunities to stand out. The efficiency it brings has been incredible.”

That is the driving goal for most businesses, according to Hartwick’s Weian Wang. As an associate professor in the Department of Business Administration and Accounting and the Department of Computer and Information Science, he sees companies of all sizes using AI to streamline time-consuming and mundane tasks, such as data analytics and information technology — his primary areas of research.

In 2024, he began offering generative AI honors courses to students, using real-world AI applications in marketing, decision-making and predictive analytics to prepare them to enter a marketplace being transformed by AI.

“The focus is to understand what generative AI is, as well as its potential role in academic and business settings,” Wang explained.

In just one year, the content of his course has changed significantly because the technology has already evolved substantially. And, while AI is clearly changing people’s lives and making things easier for many, he makes sure students understand its limitations.

“AI is not as smart as we thought,” he advised. “Students are trusting it too blindly.”

He points to younger users’ growing preference for AI tools over traditional search engines for basic internet searches.

“We (older generations) all use Google — but for students, their search engine is AI, because it’s faster. So, whatever AI generates, they just believe it — and it is far less accurate,” he cautioned.

That has also been the experience of Hartwick Professor of Art Joseph Von Stengel. A self-described product of the Atari generation, he has forged a career by blending computing, artistry and knowledge sharing, from video game design and development to the artistry it inspires. He teaches Introduction to Digital Media; Introduction to Augmented Reality; Tabletop Game Design; and Film and Animation — and he’s incorporated AI into each.

“Digital media is always changing, so my students this year will learn things that are very different from those who took my classes just a few years ago,” he said.

For Von Stengel, this technological revolution is something he’s all too familiar with. He owned a photography lab on Long Island in the late 1990s, just as digital cameras entered the market.

“Within a year, it all went away,” he shared.

Despite enduring that personal loss and unrest, Von Stengel doesn’t fear AI or the fast pace at which it’s changing our lives. For him, it’s a chance to accomplish a great deal more and share that knowledge with others. He likens it to the birth of the internet: only a small portion of the world could harness it at the start. Now, AI is leveling that playing field like never before.

“I’ve done things that used to take three months that now take just a few days,” he attested.

Professor of Art Joseph Von Stengel shows students like Bailey Ernst ’25 of Delhi, N.Y., how AI can aid artistic development rather than threaten it.

A SURPRISING MENTOR

Von Stengel has also seen firsthand how AI can help people learn — even unintentionally — as he discovered while creating a video game that required Python, a programming language he tried unsuccessfully to learn on several occasions.

“AI generated the code, but I had to figure out how to use it to make the game work,” he explained. “I changed a few things, and it did work — and I realized I actually learned Python along the way! So, there’s not just AI-empowered creativity; there’s AI-empowered learning.”

Von Stengel has also seen its benefits as a sounding board and ready guide, all of which help him and his students be more creative.

“What’s crazy is that it’s helped me see things that I don’t see in my own work,” he insisted. “I mean, sure, I have colleagues who can and do weigh in (on my work) and offer feedback, but they aren’t there when I’m working at 2 a.m.”

Many of Von Stengel’s students share those views, including Bailey Ernst ’25. The Delhi, N.Y., native and triple major (art, literature and philosophy) draws similar parallels to the advent of the calculator, photography — even the printing press.

“The fact that we cannot set type ourselves isn’t a great loss, but the exchange we made in order to get more news and information was well worth the trade,” she said. “People think the world will end every time a massive innovation occurs, but technology and innovation have always been part of art and design. With AI, we’re just moving the goal posts yet again, and that’s unleashed a more abstract approach.”

Ernst, who is currently working as an intern in Hartwick’s Art Department as she weighs her graduate options, goes so far as to suggest that we have a collective moral obligation to accept AI for the sake of tomorrow’s citizens. 

“There is always a threat of generational damage from avoiding innovation,” she argued. “It tends to bother us when the skill sets we’ve developed for decades or centuries suddenly become obsolete. Coding is a great recent example. We were all told we needed to learn it — but soon, most won’t need it at all. We have to ask, ‘How will it affect people beyond ourselves?’”

“We like making art, writing and creating. That’s part of being human. AI will never take that away.”

Stefanie Rocknak, chair of Hartwick’s Philosophy Department and coordinator of the cognitive science & AI program

THREE BIG PROBLEMS

As Polgar reviewed and considered the myriad concerns with the tech industry, he distilled them all down to three major societal issues: we can’t keep up with it; we can’t leverage our collective tech intelligence; and we don’t have the right mix of people involved.

“We need to diversify the pipeline to allow for a more holistic approach, alter the DNA of tech development and help society catch up to the speed of innovation,” he counseled. “Our ability to create guardrails and think about restraints is painfully slow.”

Ironically, one way he thinks society can gain traction is by building a sandbox: a conducive environment where an independent body would discuss ideas and formulate rules and regulations for universal adoption. This could include, for example, the development of monitoring tools that would flag harmful material without overly infringing on civil liberties.

Polgar is especially concerned with deep fakes and synthetic media, which have been increasingly used to falsely endorse products, create political vitriol or place individuals in compromising and embarrassing situations.

“How do we know when something is real or fake?” Polgar conjectured. “If we erode what we think ‘truth’ is, what does it mean for our future? I want to better align our tech future with the public interest. It affects all aspects of our lives.”

That line of thinking resonates with Jefferson Cruz ’26. The South Merrick, N.Y., native is preparing to graduate with a double major in political science and philosophy, a combination he hopes to leverage in law school. One of Rocknak’s students, Cruz is writing his senior thesis on AI from the perspectives of both majors — and finding the similarities as interesting as the differences.

“We’ve discussed what an ideal society looks like with justice through Plato’s eyes, and how some of those concepts could be applicable to governing AI,” Cruz said. “However, from a political science perspective, the major issue is that it’s a lawless frontier. It’s not in our citizens’ control. It’s not being governed.”

He points to states like Vermont, Delaware and Idaho, which recently tried to regulate AI without success because lawmakers don’t yet know enough about it. The courts, to date, have found it too broad in scope to govern with a narrow law or set of principles.

“Is it achievable? To a certain degree,” Cruz reasoned. “Eventually, some laws will come to pass. What will those look like? Probably an amendment or constitutional-style set of principles. The technology is advancing so rapidly that lawmakers and courts can’t keep up with it. It’s a threat — but to most, with the naked eye, it’s not.”

“We’ve discussed what an ideal society looks like with justice through Plato’s eyes, and how some of those concepts could be applicable to governing AI.”

Jefferson Cruz ’26, double major in political science and philosophy

AI’S LIMITATIONS

When used at an elementary level, AI often demonstrates a lack of creativity, resulting in clichéd, predictable outputs. Therefore, Von Stengel underscores the importance of being in control of AI, understanding that each model (e.g., ChatGPT, Gemini, etc.) is unique and carries its own set of limitations. This requires users to prompt (talk to) each one in a unique way — and asking the right questions is critical to getting the output one desires.

He points to an exercise he did recently to illustrate the point to his students. He used the same prompt in eight different AI models to show how unique each one is and how each has different limitations as to how it understands a prompt.

“The average person thinks mostly about the subject matter when prompting the AI, whereas a trained individual understands the depth, visual history and vocabulary of image creation and description,” he explained. “This allows them to prompt the AI at a much higher level that results in responses beyond the generic.”

Rocknak had a similar experience when she asked AI to create a shortened version of her curriculum vitae.

“It literally made up papers that I’d never written!” she warned.

These examples reveal risks that seem counter to the mindset with which most people enter an AI initiative: the desire to save time.

“If you do not know how to use AI correctly, it will fail you,” Von Stengel asserted. “You can’t cut corners. You have to be thorough, detailed and patient.”

“It’s like any (human) assistant,” Rocknak added. “They make mistakes too. You’ve just got to be sharp enough to catch it and make sure you set aside the time needed to make it better.”

Polgar underscores the inescapable role people have in holding the leash on AI, which has been used to create such vile outputs as child pornography and instructions for building a bomb.

“People have their blinders,” he warned. “They’re always surprised when they see others using these tools for evil, but the simple answer is, ‘Because someone asked it to.’” 

In the end, Rocknak believes we should treat AI like any other relationship in our lives.

“You have to ensure that you don’t get overwhelmed or taken over, just like any relationship,” she cautioned. “It’s going to take a certain strength of mind if we’re going to coexist with this.”
