Module II · Article II

Ethics of Artificial Intelligence: Machine Autonomy and Human Responsibility

Applied Ethics: Dilemmas of the Contemporary World

Why AI Requires Ethics

Artificial intelligence is not just a technology that speeds up existing processes. It is a technology that makes decisions, or imitates decision-making, in situations where humans traditionally decided. When an algorithm denies a loan, recommends a sentence, or drives a car, questions arise that were previously asked only of people: who is responsible? By what principles is the decision made? How can fairness be ensured?

Central ethical problems of AI:

Algorithmic bias: machine learning systems are trained on data that reflect historical inequities. If certain groups were historically more likely to be found guilty, the algorithm will "learn" to treat those groups the same way, reproducing injustice rather than correcting it. The COMPAS recidivism risk assessment system showed statistically significant bias against African American defendants, a finding documented in ProPublica's 2016 analysis of real US court data.
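
How such bias can be detected is itself a technical question. One common screen, sketched below with invented numbers (nothing here comes from COMPAS or any real system), compares the rates of favorable decisions across two groups, the so-called disparate impact ratio:

```python
# Minimal sketch of a disparate impact check; all decisions below are
# invented for illustration and do not come from any real system.

def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# 1 = favorable outcome (e.g., loan approved), 0 = unfavorable
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical decisions for group A
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # hypothetical decisions for group B

rate_a = selection_rate(group_a)     # 0.75
rate_b = selection_rate(group_b)     # 0.375
ratio = rate_b / rate_a              # 0.50

# The "four-fifths rule" from US employment practice treats a ratio
# below 0.8 as a signal of potential adverse impact worth auditing.
print(f"A: {rate_a:.2f}  B: {rate_b:.2f}  ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: audit the model and its training data.")
```

Passing such a screen does not make a system fair: fairness has several mutually incompatible formalizations, and choosing among them is an ethical decision, not a purely technical one.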

Transparency and explainability: a neural network is a "black box" whose internal reasoning is opaque. When an algorithm denies a loan or recommends a treatment, the client or patient has the right to know why. The European GDPR is widely read as establishing a "right to explanation," but the technical explainability of complex models remains an unsolved problem.
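
The gap is easy to see in code. For a linear scoring model, an explanation falls out for free: each feature's contribution to the decision is directly readable. A deep network offers no such decomposition out of the box. The weights and applicant data below are invented for illustration:

```python
# Sketch: the kind of explanation a linear credit-scoring model can give.
# Weights and applicant data are invented for illustration.

weights = {"income": 0.8, "debt_ratio": -1.2, "late_payments": -0.9}
applicant = {"income": 0.4, "debt_ratio": 0.7, "late_payments": 1.0}

# For a linear scorer, each feature's contribution is weight * value,
# so "why was I denied?" has a direct, itemized answer.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature:>14}: {c:+.2f}")
print(f"{'score':>14}: {score:+.2f} -> {'approve' if score > 0 else 'deny'}")
```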

The problem of responsibility: if a self-driving car hits a pedestrian, who is at fault? The manufacturer? The programmer? The owner? The algorithm is not a moral subject, yet traditional legal concepts of responsibility were created for moral subjects, that is, for humans.

The Trolley Dilemma in Autonomous Transport

The classic thought experiment, the trolley dilemma, has acquired a new dimension with autonomous vehicles. A self-driving car loses control: it can either continue straight and hit five pedestrians, or swerve and hit one. A human at the switch reacts instinctively in the moment; the car follows a pre-programmed algorithm.

This means someone must decide in advance how the car will behave in such situations, and that is an ethical decision made by engineers and regulators. The MIT Media Lab's "Moral Machine" experiment surveyed millions of people from 233 countries and territories about their preferences in such scenarios; the results revealed significant cultural differences over whose lives should take priority.
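
What "deciding in advance" looks like can be made concrete. The toy policy below is purely illustrative (no real vehicle is programmed this way, and every name and rule in it is hypothetical), but it shows that the ethical trade-off ends up as an explicit line of code that someone had to write:

```python
# Purely illustrative toy "crash policy"; every name and rule here is
# hypothetical. The point: the ethical choice must be written down.

from dataclasses import dataclass

@dataclass
class Option:
    action: str           # e.g., "stay_in_lane" or "swerve"
    people_at_risk: int   # how many people this action endangers

def choose(options):
    # One possible rule: minimize the number of people endangered.
    # Writing this single line *is* the ethical decision made in
    # advance by engineers and regulators on everyone's behalf.
    return min(options, key=lambda o: o.people_at_risk)

decision = choose([Option("stay_in_lane", 5), Option("swerve", 1)])
print(decision.action)   # "swerve" under this rule; another rule, another outcome
```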

Autonomous Weapons

Lethal autonomous weapon systems (LAWS), the so-called "killer robots," have become one of the most acute ethical issues in AI. Arguments for development: machines do not tire, do not panic, and do not make emotional mistakes, and they can minimize losses among one's own forces. Arguments against: decisions about life and death require a moral subject; delegating such a decision to a machine is an abdication of human responsibility; and the threshold for starting a war drops if machines do the fighting.

International humanitarian law requires distinguishing combatants from civilians and observing the principles of proportionality and precaution. Whether machines can apply these norms is an open question.

Principles of AI Ethics

Leading organizations (OECD, EU, IEEE, Google, Microsoft) have developed principles of AI ethics with different emphases but a basic consensus:

  • Fairness: AI systems must not discriminate on the basis of protected attributes
  • Transparency: decisions must be explainable
  • Accountability: there must be mechanisms of responsibility for AI decisions
  • Privacy: data are used with consent and are protected
  • Safety: systems are reliable and do no harm
  • Human control: in critical decisions, a human must remain in control

The transition from principles to practice is the main challenge: so far, "responsible AI" is more a marketing label than a standard with verification mechanisms.
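
One way that transition could happen is for principles to become concrete guard rails in code. The sketch below (the threshold, the names, and the set of "high-stakes" decision types are all assumptions chosen for illustration) shows a routing rule that operationalizes the "human control" principle from the list above: the system acts alone only on low-stakes, high-confidence cases and escalates everything else to a person:

```python
# Sketch of a "human control" guard rail; the threshold and the set of
# high-stakes decision types are assumptions chosen for illustration.

CONFIDENCE_FLOOR = 0.95   # below this, the model must not act alone
HIGH_STAKES = {"loan_denial", "sentencing", "medical_triage"}

def route(decision_type, model_confidence):
    """Return who decides: the automated system or a human reviewer."""
    if decision_type in HIGH_STAKES:
        return "human_review"   # critical decisions always escalate
    if model_confidence < CONFIDENCE_FLOOR:
        return "human_review"   # uncertain decisions escalate too
    return "automated"

print(route("loan_denial", 0.99))     # human_review: stakes override confidence
print(route("spam_filtering", 0.97))  # automated
print(route("spam_filtering", 0.60))  # human_review
```

A rule like this is auditable: one can verify after the fact that no high-stakes decision ever bypassed review, which is exactly the kind of verification mechanism the published principles currently lack.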
