Ethical Issues With Autonomous Vehicles
Driverless cars are the wave of the future. Many people look forward to the day when they can relax in the passenger seat and let the car do all the driving, especially in high-traffic areas.
However, self-driving vehicles rely on artificial intelligence, and while they are smart, they are not flawless. What many people may not realize is that a vehicle's decision-making reflects the morals and ethics of the professional engineers who design it, so there is a possibility that these vehicles could be trained to do something harmful. A professional engineer who engages in activity that could harm the public may face license loss and other penalties.
Many scenarios come to mind. For example, if an autonomous vehicle is about to hit a car carrying several people, it could veer off the road to avoid the collision, but pedestrians might be standing in its new path. There are two frames of mind to consider: being selfish (protecting the vehicle and its occupants) or being utilitarian (harming the fewest people).
No matter which option the professional engineer chooses, there could be consequences. Engineers tend to approach ethics through simplification, but morals are complex, and not every situation can be covered by simplistic programming. Autonomous vehicles do not understand malicious intent, but they should.
Agent–Deed–Consequence (ADC) Model
Experts recommend using a certain type of model as a framework for autonomous vehicles to make moral judgments. The Agent–Deed–Consequence model uses three variables to determine the morality of a specific situation.
The first variable is the agent's intent: was it good or bad? The second is the deed itself: was it good or bad? The third is the outcome: was the consequence good or bad?
An example is running a red light. Generally, running a red light is against the law, so many people would consider that bad. However, what if you ran a red light to avoid a collision? Wouldn’t that be a good outcome?
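The three-variable judgment described above can be sketched in code. This is a minimal illustration only: the `Situation` class, the equal weighting of the three variables, and the three-way verdict are my own assumptions for demonstration, not part of the published ADC model.

```python
from dataclasses import dataclass


@dataclass
class Situation:
    """A hypothetical container for the three ADC variables."""
    agent_intent_good: bool   # Was the agent's intent good?
    deed_good: bool           # Was the action itself acceptable (e.g., lawful)?
    consequence_good: bool    # Was the outcome good (e.g., less harm)?


def adc_judgment(s: Situation) -> str:
    """Combine the three ADC variables into a rough moral verdict.

    Treats each variable with equal weight, which is an
    illustrative simplification.
    """
    score = sum([s.agent_intent_good, s.deed_good, s.consequence_good])
    if score == 3:
        return "morally acceptable"
    if score == 0:
        return "morally unacceptable"
    return "morally mixed"


# The red-light example: good intent (avoid a crash), bad deed
# (breaking the law), good consequence (no collision).
print(adc_judgment(Situation(True, False, True)))  # -> morally mixed
```

A rule that looked only at the deed would condemn the red-light case outright; considering intent and consequence alongside it yields the more nuanced "mixed" verdict, which is the point of the framework.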
Human judgment is both stable and flexible in this way. For example, lying is generally considered bad, but there are situations in which lying could be helpful.
In any case, more research on ethics is needed before autonomous vehicles can move forward. Philosophers and the general public approach morality in different ways, so more studies are needed, particularly ones using virtual reality and driving simulations. Testing should also address preventing vehicles from being used in terror attacks; professional engineers need to ensure their vehicles cannot be turned to malicious purposes.
Keep Your License With Help From a Tampa Professional Engineers Licensing Lawyer
Professional engineers carry a huge responsibility. They must innovate while keeping the public safe, and that is not always an easy task.
If you are facing licensing issues as a professional engineer, get help from Tampa professional engineers licensing lawyer David P. Rankin. He has represented more than 100 Professional Engineers before the Florida Board of Professional Engineers. Schedule a consultation today by filling out the online form or calling (813) 968-6633.