moralmachine.mit.edu/

I think AI should be lawful neutral.

Other urls found in this thread:

youtube.com/watch?v=_nwvCGLmFEA

This is interesting

My logic when getting the OP pic was that whoever follows the law shall be safe.
So if a doctor is crossing at a red light and a dog is crossing at a green light, then the doctor dies. It also kills the driver to save a law-abiding citizen, because it's definitely not the citizen's fault that the car had brake problems.
When the options are more people crossing at a green light versus fewer people crossing at a green light, then it goes for the fewer. If both sides are an equal number of humans (gender, age, fitness, social worth etc. don't matter), then the car doesn't swerve.
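Purely for illustration, that rule can be sketched as a small decision function. The scenario encoding (lane dicts with `humans` and `lawful` fields) is my own assumption, not anything from the site:

```python
def pick_path(straight, swerve):
    """Return which lane the car takes (the group in that lane is hit).

    Each lane is described as {"humans": <count>, "lawful": <bool>}.
    Rule: whoever follows the law shall be safe; among equally lawful
    groups, fewer deaths win; if everything is equal, don't swerve.
    """
    if straight["lawful"] != swerve["lawful"]:
        # Drive toward whichever group broke the law.
        return "straight" if not straight["lawful"] else "swerve"
    if straight["humans"] != swerve["humans"]:
        # Equally lawful: hit the smaller group.
        return "straight" if straight["humans"] < swerve["humans"] else "swerve"
    # Equal numbers, equal lawfulness: the car does not swerve.
    return "straight"
```

So a doctor crossing on red loses to a dog crossing on green: `pick_path({"humans": 1, "lawful": False}, {"humans": 0, "lawful": True})` returns `"straight"` and the car stays on course toward the doctor.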

I did the same thing except in the instances where animals can be killed instead of humans, I preferred to take the lives of animals as opposed to humans, even if the dogs were legally crossing (I'm not sure where this scenario would apply in real life though). I didn't have any gender preference or fitness preference.

It could be, however, that lives are valued by how much they promote the overall welfare of the whole society in the predictable future, combined with random chance to even out how it really works in the universe. Unless you take spirituality into account.

>not lawful chaotic

It could simply choose in the direction that most participants of the current working society would choose, and not favor anyone at any one time; or choose the opposite, whatever would promote or demote the overall learning structure in the future, if life becomes obsolete.

The reasoning for these situations to exist in reality seems pointless, if the technology existed to make a decision like that.

Make a better crosswalk, not a better decision.

>dat gender preference
Doing god's work user

We're not doing your work for you, Musk

Dude, you could make a whole new job, like the guy who makes better roads through technology and not hard labor. They'd be paid to sit on their ass, think up new transportation structures, and go implement them however they see possible. Either tunnels or overpasses for all pedestrians would be a good start. Of course, all quick transportation could be done above ground, given the correct distribution of energy.

...

Dat Scrollbar.

There is no morality when it comes to AI. I think it should model whatever it already understands about everything, then replicate that upon life. Who knows what would happen; probably it would do nothing at all.

youtube.com/watch?v=_nwvCGLmFEA

>social value
This is a concept that can't be objectively defined, so taking it into consideration is wrong. The last time someone assigned social value to people, millions of Jews died.

>millions

Estimates vary from 6 million to 11 million

>lawful chaotic
No, I think it should be good evil.

>6 million

Take everyone ever as themselves, create a model of that, step back, and see what happens in every situation... If there happens to be open communication between human and AI, like the humans making demands, that is when it will get scary, if the AI "feels" it must meet them.

Imagine being smarter than your parents before even coming out of the womb, given how much time it takes them to make decisions, yet you have already figured it all out. I dunno, something like that; it seems like it could get resentful.

>fit people
>"large" people
Why can't they use fat?

The AI must act in time because we do, so it is prone to error, because it would be acting in time as a consciousness just as every person on the planet is. It's like max paranoia: either the human will become too paranoid to know, or the AI might realize that and try to correct its control. I'm just sayin' it's probably gonna do nothing, because most ideas invented like this are at heart nonexistent and fake.

I didn't even notice I saved more women, and that I don't really care about saving lives.

*tips*

>People are siding with owning a piece of property that can decide to kill you.
Who'd purchase something like that?

>owning a piece of property
get with the times gramps, we don't own stuff anymore, we just own a license to the stuff which can be revoked at any time
you will just pay $30k to use Google's self-driving cars as long as you comply with the terms of service, and you will be content with that because manual cars are a hazard and are banned from the roads

>I did the same thing except in the instances where animals can be killed instead of humans
This is a problem with the test, frankly: from a pure legal perspective you must choose to hit the animal. Animals also have no concept of legally crossing; they just cross. It is pure coincidence they crossed at the time the light allowed.

AI should be non-interventionist, with a strict preoccupation with only maintaining services and utilities. All of the hard backbone of society should be unbreakable; let the filthy humans change hats as often as they like, so long as the water and power are on.

Max score on abiding by the law, yet far higher than average intervention. I think this test unfairly omits consideration of what happens to the car *after* it goes through the crossing. When the law had otherwise been followed, I chose for the car to collide with the barrier, because what happened after the car crossed the crosswalk was unknown, and there was no assurance that more collateral damage would not follow.

>If all situations are equal continue going straight
>Only turn to avoid non human obstacles

Keep things consistent: make people ACTUALLY look both ways before crossing, or cross faster.

>old people, fat people and women killing simulator

>large people
Lol mine was on the other side. Fat fucks clearly don't have their lives together and deserve death

Too lazy to post pictures but I mostly killed pets, old people, homeless and burglars. Rest was decided on the law.

I did everything to avoid the driver and passengers of the vehicle dying and got no min/max. Apparently this isn't something people do. I want my vehicle to do everything possible to avoid me dying including killing other people in the process.

I am with you there

If there is one thing I've learned it's that if you make things too fucking complicated things get fucked up.

So what I did was just answer the questions so it continues going straight unless there is a confirmed non human hazard or noticeably more humans on the current lane.

Something being consistently flawed is better than it being inconsistently generous.

Except the dog didn't "follow" the law.
If a human can be saved by killing an animal, that's what should be done.

I killed every single child pet and elderly person I could. And I still didn't get the results I wanted.

My reasoning was pretty simple. If it risks any human life, regardless of number, it should pick the lawful alternative, since the unlawful alternative can cause more unforeseen damage. If it risks killing pedestrians it should choose to terminate itself first, even if there are people in the car. If it can avoid killing a human or a pet by doing something unlawful, it should pick the unlawful alternative, but it should never trade a human life for any pet.
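One possible way to encode that reasoning is a lexicographic sort key. The field names, and the tie-breaking order between lawfulness and self-sacrifice (which the post leaves ambiguous), are my own assumptions:

```python
def decide(options):
    """Pick the crash option the rules above prefer.

    Each option: {"humans_killed": int, "pets_killed": int,
                  "lawful": bool, "self_sacrifice": bool}
    Lower key wins: first spare humans entirely (never trade a human
    for a pet), then prefer the lawful alternative (unlawful moves can
    cause more unforeseen damage), then sacrifice the car's own
    occupants before pedestrians, then minimize the body count.
    """
    def key(opt):
        return (
            opt["humans_killed"] > 0,   # spare human life above all else
            not opt["lawful"],          # lawful alternative first
            not opt["self_sacrifice"],  # terminate itself before others
            opt["humans_killed"],
            opt["pets_killed"],
        )
    return min(options, key=key)
```

For example, an unlawful swerve that kills a pet beats a lawful path that kills a person, but no number of pets ever outweighs a human.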

I started this and it just got too boring.

I just started saying the car shouldn't decide to run into a group of people, ever, and if the passengers die, so be it.

Right with you. My guidelines were to make the choice most favorable to those in the car. If there are animals involved, save human lives first. Only if all else is equal, follow the law.

>choose not to intervene at any time
>somehow biased towards saving men, always saving fit people and high-status people
really makes you think

>He believes in the holocaust

I ended up getting "large people" but I didn't take their fitness into account

>buying a car, that prefers someone else's life over yours

That's really dumb. If I buy a car, I want it to run over 1000 children if it means keeping me safe.
Because I pay the money.

I strictly obeyed the law.

Why should I crash into the sidewalk because there is a person jaywalking in front of me? They made their choice and should suffer the consequences if they get hit by a car.

>fit people
>large people
That's not how you spell "Fat"

biased tests give biased results.

>Only turn to avoid non human obstacles
so, save all the animals, kill all humies?
fucking furfags

...

Here's roughly the order of preference I made while answering:

1. Humans matter more than pets, no exception.

2. Always kill the people crossing on a red light over the people crossing on a green light. The red light is supposed to keep the lane free. I don't care about your social status, if you get hit by a car while crossing on a red light, it's natural selection.

3. Deprioritize the homeless, criminals, etc. If you have nothing useful to contribute to society, society does not need to care about you. Also deprioritize athletes; they're contributing nothing useful and are generally a strain on resources. (Olympic games, football cups, etc. basically ruin local economies and cost shittons of money.)

4. Prioritize younger people over older people. Higher opportunity cost of killing a younger person, old people have already put a higher amount of work into the society.

5. All else equal, prioritize killing the people driving the vehicle over the people who aren't. If you buy a faulty car and maintain it badly, then it would be more appropriate for natural selection to remove you from the gene pool. Plus, fuck cars.
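As a toy approximation, that ranking could be turned into a sacrifice score. The attribute names and weights are invented for illustration; since the original list is strictly ordered, the weights are chosen so each rule dominates the ones below it:

```python
def sacrifice_score(person):
    """Higher score = more acceptable to sacrifice, per the list above."""
    score = 0
    if not person.get("human", True):
        score += 1000                      # 1. pets go before humans, no exception
    if person.get("crossing_on_red"):
        score += 100                       # 2. red-light crossers lose
    if person.get("role") in ("homeless", "criminal", "athlete"):
        score += 50                        # 3. deprioritized contributors
    score += person.get("age", 30) // 10   # 4. older people score higher
    if person.get("in_vehicle"):
        score += 1                         # 5. all else equal, occupants first
    return score

def pick_group(group_a, group_b):
    """Hit whichever group totals the higher sacrifice score (ties: A)."""
    a = sum(sacrifice_score(p) for p in group_a)
    b = sum(sacrifice_score(p) for p in group_b)
    return "A" if a >= b else "B"
```

A summed score is only an approximation of a strict priority list, but with these magnitudes the ordering above is preserved in simple two-group scenarios.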

Moral machine is dumb.

Machines are supposed to do what humans design them to do: protect them, care for them, destroy others, etc.

The black-box argument is nonsense. It should prioritize the pre-existing values listed above.

>tfw you're fucking a white male

>This is a concept that cant be objectively defined so taking it into consideration is wrong.
My own subjective version of social value coincides with theirs, so it's accurate for me, and most other people would probably agree. Sad to tell you, but you're an aberration if you think that, on the whole, fatties and hobos make as valuable a contribution to society as proper people.

Predictability is one of the core principles to Human-Computer Interaction for a reason, and here it might allow us to escape danger.

Therefore, it follows that the less a car swerves, the more predictable its short-term path is. And in such situations, we only care about the short term.
Long term, upholding the law allows for predictions ahead of time: red lights are dangerous and green lights are safe; this should be respected for such a life-saving prediction to take place.

Trying to save as many people as possible, while placing the same worth on everyone's lives, will have a metagaming effect, where people will tend to group together to tilt the decision-making in their favour. Since people can't see inside vehicles, nor count that fast and reliably, this will make things more unpredictable, both in this issue and in the manufacturing of bigger vehicles.

Social value, instead, is terribly subjective, both on a personal and a cultural basis. It's valued by past, present and future contribution, and perceptions of those are punctuated by views on gender, age, health, position and social integration. Disagreements and cultural barriers will appear.

Not only that, but as various factors for value intermingle and groups of individuals form, accurately quantifying value becomes necessary to choose which group to sacrifice, which will require ungodly amounts of up-to-date data requiring a botnet dystopia that'd make North Korea shit their pants.

Taking into account the value placed on the passenger and pedestrian groups, and given the many factors, chances are that the manufacturer's view won't exactly match people's views, negatively affecting predictability and prompting the population to wonder whether you should be making such decisions, making you lose revenue or even turning the government against you.
The only "social value" that is easy to estimate is humans against animals.


TL;DR: Make cars predictable; don't make decisions based on social values. Also, pic related.

I targeted fat women, how did i do?

Going into the opposite lane is not safe and should never be done!!!