Why The Moral Machine Is Immoral

Introduction
The Moral Machine, developed by MIT, is a simulator designed to explore ethical dilemmas faced by self-driving cars. In situations where an accident is unavoidable, the car must choose between two tragic outcomes: harming pedestrians or harming the passengers it carries. However, this framework is not merely irrelevant; it fundamentally misunderstands how self-driving cars actually operate and how they are trained to handle real-world scenarios.

The Realistic Function of Self-Driving Cars
From an engineering perspective, the Moral Machine relies on an outdated model of self-driving cars as systems that execute predefined ethical decisions, a stark contrast to how decisions are actually made in both human and machine-driven contexts. In reality, self-driving cars use neural networks that mimic the pattern recognition and decision-making processes of humans, albeit without consciousness. When faced with critical situations, humans do not follow a scripted set of ethical directives; they make split-second decisions based on context and on instincts honed by experience and varied inputs over time. Similarly, self-driving cars process real-time data from their environment to make the safest possible maneuver in the moment.

This approach reflects a key aspect of moral behavior: adaptability and responsiveness to the immediate situation, rather than adherence to rigid, predetermined rules. By designing cars that simulate this aspect of human decision-making, engineers aim to create systems that can handle unforeseen circumstances with a flexibility and safety that mimics human judgment, without attempting to codify morality into absolutes. This not only aligns more closely with how humans actually drive but also sidesteps the ethically fraught territory of programming specific value judgments into machines.
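To make the contrast concrete, here is a minimal, purely illustrative Python sketch. It is not any manufacturer's actual pipeline; every name, number, and formula below is a hypothetical stand-in. The first function encodes the Moral Machine's premise of a fixed ranking of whose life to spare; the second is closer to practice, scoring candidate maneuvers against the live scene and taking the lowest-risk one.

# Hypothetical sketch only: contrasts a pre-scripted ethical ranking with
# in-the-moment risk minimization. All names and numbers are illustrative.

from dataclasses import dataclass
from typing import List


@dataclass
class Obstacle:
    kind: str            # e.g. "pedestrian", "vehicle", "barrier"
    distance_m: float    # distance along the car's current path
    lateral_m: float     # lateral offset from the car's centerline


# Moral Machine style: a fixed, pre-coded ranking of whose life to spare.
ETHICAL_PRIORITY = {"pedestrian": 0, "vehicle": 1, "barrier": 2}  # lower = spare first

def scripted_choice(options: List[Obstacle]) -> Obstacle:
    """Pick the collision target from a fixed ranking of 'worth'."""
    return max(options, key=lambda o: ETHICAL_PRIORITY.get(o.kind, 0))


# Closer to practice: evaluate maneuvers and take the lowest-risk one.
def estimated_risk(obstacle: Obstacle, braking: bool) -> float:
    """Toy risk score from real-time geometry: nearer and more central is worse.
    A real system would use learned models over sensor data, not this formula."""
    base = 1.0 / max(obstacle.distance_m, 0.1) + 1.0 / (1.0 + abs(obstacle.lateral_m))
    return base * (0.5 if braking else 1.0)

def adaptive_choice(options: List[Obstacle]) -> tuple:
    """Score every (obstacle, braking) candidate against the current scene and
    return the lowest-risk one; no variable encodes whose life is 'worth' more."""
    candidates = [(o, braking) for o in options for braking in (True, False)]
    return min(candidates, key=lambda c: estimated_risk(*c))


if __name__ == "__main__":
    scene = [Obstacle("pedestrian", 12.0, 0.5), Obstacle("barrier", 8.0, 2.0)]
    print("scripted:", scripted_choice(scene).kind)
    best, braking = adaptive_choice(scene)
    print("adaptive:", best.kind, "while braking" if braking else "while swerving")

The point of the toy risk function is not its formula but its inputs: geometry and timing from the current scene, with no term anywhere that stands for a person's social worth.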

We should leave it there: minimize risk in the moment, and resist the temptation to go further.

The Ethical Implications of Determining Social Worth
No programmer, executive, or legislator should have the authority to assign a value to human life. Decisions about the worth of a life, when encoded into algorithms, can lead to severe biases and misuse. The very idea of a machine learning someone's value through algorithms and limited data sets a dangerous precedent. This is not about discomfort in making life-or-death decisions; it is about the fundamental inappropriateness of such power being concentrated in the hands of a few, or worse, hard-coded into software on the basis of something as unsuited to the task as machine vision. If we implement such a system for cars, what does this mean as AI comes to dictate more and more of our lives? We have to stop this precedent today.

Consumer Perspective
Imagine a scenario where self-driving car manufacturers disclose their decision-making algorithms in an effort to demonstrate transparency and eliminate biases. Would you purchase a vehicle programmed to prioritize other lives over yours in a crash? Transparency about decision-making algorithms does not alleviate the core issue; rather, it highlights the impracticality and likely commercial unviability of such ethical programming in consumer vehicles.

Conclusion
The Moral Machine presents an unrealistic and ethically questionable scenario that distorts the practical and moral landscape of self-driving technology. By forcing us to make absolute decisions about the worth of lives based on simplistic scenarios, it detracts from the real challenges and advances in autonomous vehicle technology. The framing demands that we make these decisions in advance, playing a sort of god with people's lives. I am not sure what the right strategy is, but it is certainly not weighing the value of one life against another on the strength of machine vision.