Module VIII·Article II·~3 min read

The Ethics of Technology: From Bioethics to AI

Philosophy in the 21st Century


Technology as a Moral Issue

No technology is just a tool: each changes relationships between people, the distribution of power, and what counts as possible and normal. Writing changed memory and authority; printing changed the information order and politics; the internet changed speed, identity, and privacy. Understanding these changes is the task of the ethics of technology.

Hans Jonas, in "The Imperative of Responsibility" (1979), formulated the problem: traditional ethics was an ethics of proximity (responsibility for those nearby, here and now). Technology made actions with distant and long-term consequences possible — impact on climate, genetic modifications, nuclear weapons. A "long-arm ethics" is needed.

Bioethics: Life, Death, and the Body

Bioethics emerged in the 1960s and 70s with the development of medicine: life support machines made death controllable, transplantation turned the body into a resource, and reproductive technologies made birth plannable. Beauchamp and Childress formulated four principles: autonomy (the patient's right to make decisions), non-maleficence, beneficence, and justice.

The most difficult questions: When may a life be ended (euthanasia)? Is genetic engineering of humans permissible? Who owns biological data? How should scarce donor organs be distributed? Each question requires balancing competing values and has no ready-made answer.

CRISPR-Cas9 made genome editing cheap and precise. In 2018, Chinese scientist He Jiankui created the first gene-edited babies, with an edited CCR5 gene. The scientific community's condemnation was unanimous: the risks are unknown, and the children could not consent. But the discussion about acceptable boundaries is only just beginning.

Ethics of AI: Algorithms and Fairness

Artificial intelligence generates new ethical problems. Algorithmic bias: AI systems trained on historical data reproduce historical discrimination. Amazon's recruitment algorithm penalized women's resumes because it was trained on resumes submitted predominantly by men. COMPAS, used in American courts to predict recidivism, demonstrated racial bias.
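The mechanism can be seen in a toy sketch. The data below is entirely hypothetical, and scoring by historical approval frequency stands in for real model training; the point is only that a model fit to biased past decisions reproduces the disparity encoded in them.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (feature, hired).
# Suppose "club" membership correlated with gender in the past,
# and past hiring favored club members.
history = [
    ("club", True), ("club", True), ("club", True), ("club", False),
    ("no_club", False), ("no_club", False), ("no_club", True), ("no_club", False),
]

# "Training": estimate P(hired | feature) from the historical data.
counts = defaultdict(lambda: [0, 0])  # feature -> [hired_count, total_count]
for feature, hired in history:
    counts[feature][1] += 1
    if hired:
        counts[feature][0] += 1

def score(feature):
    hired, total = counts[feature]
    return hired / total

# The "model" simply reproduces the historical disparity:
print(score("club"))     # 0.75
print(score("no_club"))  # 0.25
```

Nothing in the code refers to gender, yet the proxy feature carries the old bias forward: this is why "the algorithm doesn't see protected attributes" is not a guarantee of fairness.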

Responsibility and transparency: when an algorithm decides on a loan, a medical diagnosis, or a finding of guilt, who bears responsibility? How can the decision of a "black box" be explained? The European GDPR introduces a "right to explanation": algorithms cannot be completely opaque.

Autonomous weapons: drones capable of independently selecting targets are already a reality. This violates the "human-in-the-loop" principle, which requires a human decision before lethal force is used. The Geneva Conventions did not anticipate such systems. The debate about "killer robots" is not science fiction.

Digital Privacy and Surveillance

Shoshana Zuboff, in "The Age of Surveillance Capitalism" (2019), argues that big tech's data model is not simply an "exchange of data for services." It is a one-sided appropriation of human experience as raw material for predicting and modifying behavior: an unprecedented new form of power.

For philosophy, this raises questions of identity in the digital age: who am I if an algorithm knows me better than I know myself? Of autonomy: what remains of it if my preferences are shaped by a recommendation system? Of democracy: what happens to it if the information environment is personalized to the point of isolation?

Question for reflection: What algorithms make or influence decisions in your professional field? Can you explain how they work and assess their fairness?
