Terry Newell

Terry Newell is currently director of his own firm, Leadership for a Responsible Society.  His work focuses on values-based leadership, ethics, and decision making.  A former Air Force officer, Terry also previously served as Director of the Horace Mann Learning Center, the training arm of the U.S. Department of Education, and as Dean of Faculty at the Federal Executive Institute.  Terry is co-editor and author of The Trusted Leader: Building the Relationships That Make Government Work (CQ Press, 2011).  He also wrote Statesmanship, Character and Leadership in America (Palgrave Macmillan, 2013) and To Serve with Honor: Doing the Right Thing in Government (Loftlands Press, 2015).

Think Anew

The Ethics of Artificial Intelligence

In 1997, IBM's Deep Blue supercomputer, able to quickly evaluate 50 billion potential moves, beat world chess champion Garry Kasparov.  In 2016, Google's AlphaGo, playing a game with vastly more possible moves than chess and "trained" on a database of more than 30 million expert moves, beat Go master Lee Sedol.  In 2017, AlphaGo Zero, trained only on data it generated itself, beat AlphaGo.

Within this story is an advance in Artificial Intelligence (AI) from brute-force computing power to machine learning.  Deep Blue used pre-programmed human rules.  AlphaGo Zero created most of its rules as it "learned."  Machine learning starts with bare-bones computer code and then searches for patterns in masses of data.  The difference: Deep Blue's creators could explain how it won; AlphaGo Zero's could not.  Or, as Wharton School's Kartik Hosanagar explains in A Human's Guide to Machine Intelligence, "The reasoning behind a deep learning algorithm's decisions is often impenetrable to even the programmer who created it."
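To make the distinction concrete, here is a minimal Python sketch - a toy illustration, not how Deep Blue or AlphaGo actually work.  The first function is a rule a human wrote out explicitly; the decision tree derives its own rule from example data, and that rule lives inside the fitted model:

```python
# Toy contrast between hand-coded rules and learned rules.
# Assumes scikit-learn is installed; the data is invented.
from sklearn.tree import DecisionTreeClassifier

# Deep Blue-style: a human writes the rule explicitly.
def human_rule(piece_advantage):
    return "attack" if piece_advantage > 0 else "defend"

# Machine-learning style: the model infers a rule from labeled examples.
X = [[-2], [-1], [0], [1], [2]]          # toy feature: piece advantage
y = ["defend", "defend", "defend", "attack", "attack"]
model = DecisionTreeClassifier().fit(X, y)

print(human_rule(1))              # a rule we can read and explain
print(model.predict([[1]])[0])    # a rule the model found on its own
```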

In AI, an algorithm is a set of steps embedded in computer code.  AI algorithms are ubiquitous - and will become more so.  Amazon's buying recommendations ("people who bought this also bought ..."), for example, come from an algorithm that searches its database for customers who bought the same items you did.
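A hypothetical, stripped-down version of such a recommendation algorithm might simply count co-purchases - an illustration of the idea, not Amazon's actual code:

```python
# Toy "people who bought this also bought" recommender: count how often
# other items appear in orders containing a given item.  Data is invented.
from collections import Counter

orders = [
    {"book", "lamp"},
    {"book", "lamp", "mug"},
    {"book", "mug"},
    {"lamp", "rug"},
]

def also_bought(item, orders, top_n=2):
    companions = Counter()
    for order in orders:
        if item in order:
            companions.update(order - {item})
    return [name for name, _ in companions.most_common(top_n)]

print(also_bought("book", orders))  # e.g. ['lamp', 'mug']
```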

AI applications go well beyond selling products and increasingly raise ethical challenges.  They exist in areas as diverse as self-driving cars, medical diagnosis, facial recognition surveillance, and criminal justice.  Boeing and Airbus, which already rely on autopilot, are working on self-flying planes.  AI applications don't just make recommendations; they make decisions.  In May 2010, Wall Street trading algorithms wiped out nearly $1 trillion of market value in half an hour as they decided and executed sell orders without human intervention, triggering one another as they went.

Since machine learning "trains" itself on real-world data, bias in the real world can find its way into AI-based decisions.   In a 2016 ProPublica report on algorithms used by Florida courts to determine recidivism risk, "black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk."  As reported by Hosanagar, a 2017 Carnegie Mellon team found that "an ad for a career-coaching service for executive positions paying more than $200,000 was shown [by Google] in 402 out of 500 male profiles and to only 60 out of 500 female ones." 
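The kind of audit ProPublica performed can be sketched in a few lines: compare how often each group is wrongly flagged as high risk.  The records below are invented purely for illustration:

```python
# Toy bias audit: compute the false-positive rate per group, i.e. how
# often people who did NOT reoffend were still flagged high risk.
records = [
    # (group, flagged_high_risk, actually_reoffended) -- invented data
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("B", True,  False), ("B", False, False), ("B", False, False),
]

def false_positive_rate(group):
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend)

for g in ("A", "B"):
    # Unequal rates across groups are one signal of disparate impact.
    print(g, false_positive_rate(g))
```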

Privacy is another area of AI concern.  Algorithms are central to the facial recognition now in ever-wider use in surveillance systems.  AI is also being developed to guess facial features from voice recordings.  And Amazon has filed for a patent that would allow Alexa to determine your emotional state and target ads based on it.

Hosanagar notes that "algorithmic suggestions don't come with a warning label.  Maybe it's time they did."  He argues for an "Algorithmic Bill of Rights" to address ethical issues before AI gets beyond human understanding and control - and to increase trust in it.

The Partnership on AI, a collaboration of more than 80 developers and users - including Google, Facebook, Amazon, Human Rights Watch, the ACLU, and Amnesty International - is working on such issues as fairness, inclusivity, security, privacy, safety, and transparency.  Many are interrelated.  The more public the source code for an AI application (transparency), the easier it can be to manipulate, as with electronic voting machines (security and privacy).

The Defense Advanced Research Projects Agency (DARPA) has taken on the task of helping us understand what happens in the "black box" of machine learning applications.  Its "Explainable AI" project aims to enable users to answer such questions as: Why did you (the AI) do that?  When do you fail?  When can I trust you?  And how do I correct an error?
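One simple form such an answer can take - far cruder than what DARPA's program envisions - is to report which inputs mattered most to a model.  A sketch assuming scikit-learn, with invented toy data:

```python
# Crude "why did you do that?": inspect a fitted model's feature
# importances.  Feature names and data here are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

X = [[30, 0], [45, 1], [25, 0], [50, 1], [35, 1], [28, 0]]  # [age, prior_flag]
y = [0, 1, 0, 1, 1, 0]                                      # toy risk labels

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, weight in zip(["age", "prior_flag"], model.feature_importances_):
    print(f"{name}: {weight:.2f}")  # a rough "explanation" of the model
```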

AI will also put jobs at risk, from Uber and truck drivers to radiologists and pathologists - the latter because experiments demonstrate that machine learning can train itself to read scans and slides on a par with skilled professionals.  How will we help people, whose livelihoods and sense of self are tied to their careers, navigate the advance of AI?

AI may also leave us vulnerable when it fails.  Will pilots who have grown dependent on AI still have the requisite skills once their own capabilities have atrophied?  Medical diagnosticians?

Cinelytic is a startup that applies AI to historical data about movie performance, themes, and talent. It offers recommendations to producers on how substituting one actor for another might impact a film's box office.  As AI advances from stories in movies to how to cast them, we can expect more AI ethical issues to move from science fiction to fact in our lives.

