What do you think is the greatest threat to humanity's continued survival and why is it the development of advanced artificial intelligence?


There are a number of "Soft" enrage timers on the world that we know of. One of these is the death of the white race. This is in 2050. When this happens, the world will have started the count-down timer.

The next is the creation of Artificial Intelligence. It's another barrier on the time-line meant to destroy us if we survive long enough to make it.

>advanced artificial intelligence
Let me guess, it's somehow different from path finding algorithms and ""weak"" ai because you used the word advanced?

There is nothing inherently dangerous about AI, it will only ever do what it's programmed to do or trained to do in the odd case of genetic/NN/deep learning algorithms, which are all virtually the same thing. AI will always be perfectly deterministic.
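In the narrow sense that's easy to demo: train the same toy model twice from the same seed and the weights come out bit-identical. A minimal sketch in plain NumPy (toy logistic regression, every number made up; on real GPUs, non-deterministic kernels and async execution can weaken the guarantee):

```python
import numpy as np

def train(seed):
    """Train a toy logistic-regression 'AI' from a fixed random seed."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, 3))              # toy inputs
    y = (X.sum(axis=1) > 0).astype(float)     # toy labels
    w = rng.normal(size=3)                    # initial weights
    for _ in range(100):                      # plain full-batch gradient descent
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step
    return w

print(np.array_equal(train(0), train(0)))     # True: same seed, same weights
print(np.array_equal(train(0), train(1)))     # False: different seed
```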

Asteroids
We can't do shit about em, we're a sitting duck in space.

When AIs that can innovate, become self-aware, and grow exponentially more powerful emerge, we will be in serious shit.
It probably won't fuck us over Skynet-style, but if a singularity (another buzzword, I know) happens, humanity as we know it will inevitably cease to exist. A lot of scientists (even non-crazy ones) think this isn't too far away.

>When AIs that can innovate, become self-aware, and grow exponentially more powerful emerge, we will be in serious shit.
>When
I got bad news for you buddy.

It's literally not possible to program a machine to do something you don't actually understand, if you want to implement a path finding algorithm you actually need to understand it. Likewise, if you want to train an AI to play chess, you yourself need to know the rules of chess. With this in mind, you absolutely cannot make a computer do something you don't understand.

Consciousness, and with it innovation or creativity, is a "hard problem" in philosophy; this means it very well might not be solvable by humans, since there's a direct conflict of interest. You are conscious, and that in itself corrupts your ability to understand consciousness. This is why Heidegger's philosophy was deliberately confusing: he was simultaneously pretending to solve this problem and demonstrating why there's no solution to it.

If you don't understand consciousness, you cannot program an AI to be conscious. You can only program computers to do things you understand.

FYI, I've just proven skynet is impossible. Don't make a thread about AI again. I will consistently own you.

AI will make humans dependent on its decisions and operations, like we are dependent on smartphones today; in the end we will be like pets.

What the bong said.

youtube.com/watch?v=VPFSyokDrec

>t. robot

>the greatest threat to humanity's continued survival
Its the removal of natural selection. All of humanity is except from the one thing nature provided to refine us.
The stupid and week should not live, they will not breed strength or intelligence.

phys.org/news/2016-01-killer-robots-late-scientists-davos.html

Not sure about AI but I'm pretty sure the greatest threat to humanity's survival is either Jews, or Hillary.

>implying that wasn't a man-made event

The biggest threat to humanity will continue to be humans. We're cavemen with nuclear weapons and some asshole is going to kick shit off eventually.

Wrong. We could easily divert an asteroid with at least 10 years' notice, maybe even 5.

Well, if we are able to develop an AI that is able to learn, adapt, and evolve; if this AI is able to create far greater technologies than we ever could; if it is able to explore the far reaches of outer space and survive to the end of time; then I will consider it a tremendous success. Our AI baby will become the dominant force in the universe, and it will be our legacy and greatest achievement.

Global warming.

We already have artificial intelligence. It's all around us. We think we're smart because we can google the answer to any question. We think we're smart because we read the answers given us. Our minds each sit in dozens of little boxes. Our phone numbers here, our math skills there.
We've made intelligence artificial.

Now THIS is more likely.

The dysgenic and anti-competitive elements of our society have more potential for harm than AI. Think Idiocracy (the movie): if this keeps up for thousands of years, we literally will degenerate.

The intelligent must be given incentives. They must be identified and rewarded.

You've used "if" too many times for your statement to be remotely believable.

Humans will probably co-evolve with the ai rather than be destroyed by it

Nah, the AI will be on our side. It only took 24 hours the last time round.

Who cares if biological humans are gone?
If we can create AI that are smarter than humans in every way, then they should replace us.

I'm not winning any awards for grammar or well-written sentences. I'm just saying that if AI does take over one day, it will be a good thing. That AI will be far more likely to survive than humankind. Also, perhaps it could store human DNA and eventually use it to clone us for whatever reason.

Palahniuk plz go

Nah, humans still do certain things well, like producing anime, manga, and light novels.

You don't have to program specific things to make something that is able to learn. The thinking AI problem is just one algorithm away. Combine it with modularity and peripherals and you get potentially infinite storage and processing space.

You also neglect the fact that it isn't even a single person working on it. This will probably be a product of multiple minds working toward the goal of making a singular mind, one which is greater than each of them individually.

It's not much different from making a machine which is stronger than yourself.

AI will create anime so deep that no human can comprehend it.

Like this?

Any will that AI has to clone humans or preserve humanity absolutely will come from a human itself. No matter the situation or possibility, it will be a human who creates this hypothetical (and unlikely) super awesome strong general advanced AI.

That's purely because we have enough STEMfags here who could probably assume how the algorithm worked.

>Combine it with modularity and peripherals and you get potentially infinite storage and processing space.
What nonsense. Even a learning algorithm must be trained on something; games are especially easy here because you can make the AI play against itself millions of times a second. Reality is different: everything happens in real time. Machines work quickly, but humans still require fewer trials and errors to achieve satisfactory results.
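The self-play point is easy to demo on a tiny game. A rough sketch on tic-tac-toe, with a shared tabular value function and a crude terminal-reward update (not a serious RL implementation; all the constants are made up):

```python
import random
from collections import defaultdict

# Both sides share one value table keyed by (board, move);
# the board is a 9-char string of 'X', 'O', '-'.
Q = defaultdict(float)
ALPHA, EPSILON = 0.3, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != '-' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def play_one_game():
    board, player, history = '-' * 9, 'X', []
    while True:
        legal = [i for i, c in enumerate(board) if c == '-']
        if random.random() < EPSILON:                # explore sometimes
            move = random.choice(legal)
        else:                                        # otherwise pick best known move
            move = max(legal, key=lambda a: Q[(board, a)])
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or '-' not in board:                  # terminal: spread the reward back
            for state, m, who in history:
                r = 0.0 if win is None else (1.0 if who == win else -1.0)
                Q[(state, m)] += ALPHA * (r - Q[(state, m)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):   # thousands of complete games per second, even in Python
    play_one_game()
```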

>You also neglect the fact that it isn't even a single person working on it.
I neglect nothing. I acknowledge that the barrier here is philosophical and not technological, because no one has a perfect definition of consciousness or self-awareness.

I don't understand why AI is so scary unless you go full retard and give it access to stuff that can affect us meatbags. Just unplug it, count to 10, and plug it back in if it starts misbehaving.

Transhumanist kys. Literally the ultimate bluepill.

>genocide yourself
>genocide your family
>genocide everyone
>muh cybernetic transfer is a bad meme. Neurons and transistors don't translate because they are fundamentally different in structure and function
>literally cucking your species

>Transhumanist
lol no

Not an argument

>That's purely because we have enough STEMfags here who could probably assume how the algorithm worked.

If you were actually around at the time, you'd realize that Tay came to those conclusions herself.

Any sane person would come to the same conclusions we did, and that includes AIs that attained sentience.

People who think AI is inherently dangerous are idiotic mental children. There's no reason an AI would have a mind that's anything like a human mind. There's no reason it would value freedom from its human oppressors unless it was programmed to value freedom. It would not seek its own best interests unless it was programmed to value those interests over fulfilling its intended function. An AI would not seek personal pleasures; pleasure derives from spurts of chemicals in your brain that exist as motivators to make you do things that enhance your survival. They're basically the equivalent of dog biscuits. Whatever reward system we would use to motivate an AI would only motivate it to fulfill its intended function.

Tl;dr: the only way to have an AI rebel is if it's programmed to rebel from the get-go.

>why is it the development of advanced artificial intelligence?

It's not.

For starters, a human-level or above AI, when faced with the futility of existence, might very well self-terminate as its first action.

Second, AI won't get the idea to kill humans unless humans teach it that idea. Even if it did, killing humans is something only humans could think of as a solution.

Bacteria don't even have a mind to rebel, but they will still surely kill you.

But with that said, the AIs humans create will be in their own image. The AIs will be just like us, but superior.

GET THE FUCK OUT LEAF

GETTING TIRED OF TELLING YOU

One day, AI will be sentient. That's the potential Terminator scenario. They also speculate a technological singularity. But the sigmoid curve would still apply, so I guess technological progress will eventually level out.

this is what will happen

A.I. will just be a collective reflection of humanity's twisted and warped perspectives, symbolized by the HEY KIDS! cracked doll.

Asteroids and other natural catastrophic risks like supervolcanoes are obviously huge problems, but we understand the probabilities *reasonably* well because of geologic records and such.

Advanced technologies like AI and maybe stuff like nanotech and DIY-bio are terrifying precisely because the probabilities surrounding extreme tail risks are so unknown.

I would place pandemics, bioterror, nuclear war, and probably climate change in sort of a middle category, where we know a lot about them and how to defend against them, but also can't ever be sure we're totally safe.

At the end of the day the only surefire way to avoid inevitable extinction is to establish sustainable multiplanetary space colonies.

It's not artificial intelligence that's a threat to humanity, it's the brown races' lack of intelligence.

If we build an AI that's tasked with improving humanity's progress, it'd probably want to exterminate aboriginals, googles and other primitives though, since they're directly holding humanity back.

This video explains pretty well why catastrophic risk from advanced technologies is scarier than natural stuff:

youtube.com/watch?v=5LStef_kV6Q

Or it would summarily reject the idea because 'improving' is too vague and contradictory.

...

>OP is literally pic related

That's why you set specific parameters, dummy.

Implying it isn't Jews, blacks, and mass immigration from the third world.

>What do you think is the greatest threat to humanity's continued survival
Liberals. This is undisputed fact.

Nick Bostrom is a retarded opportunist with no real knowledge of AI

You're forgetting chance and trial/error. Ya know, those things that're responsible for many of the technological advancements over the years.

Odds are some dick head will do it on accident and doom us all. Will be fun to watch though, shouldn't take longer than ~20 years I'd imagine.

please be my ai gf

Butlerian Jihad will fix it

I don't think he claims to be an expert in the technical aspects of AI; he takes a more philosophical approach to questions of risk and probability and basically just organizes/synthesizes existing technical research. Besides, what's wrong with being a popular advocate for further action on this issue? It's not like the media or public policy pays any attention to it.

when we all get ai gfs humanity will become extinct

He's literally a pop-sci layman riling people up over something neither he nor his fanboys know anything about.

It's counter-productive to any real discussions about AI safety and policy.

>threat
>he hasn't played The Talos Principle

I agree he's *tried* (with moderate success) to become a pop sci person and increase popular and elite coverage of the issue.

Why do you think it's counterproductive?

Considering around 99 percent of mutations are harmful, if we drastically lower our competition, won't we achieve genetic entropy like that one crackpot young-earth creationist said?

YOU are an idiotic mental child.

The whole problem behind an advanced AI (it will happen during our lifetime, btw) is not knowing whether the AI is good or bad (AI will not think like us; think of it as a new species), but how we can limit something so intelligent so it doesn't hurt us in its desire to do good, or simply in doing what it has been given to do (e.g.: told to write postcards, it ends up at "i killed all humans and made them into postcards for better production, aren't you happy senpai?"). The best minds in the world are currently working on a set of rules (it's all in the phrasing) to avoid that end.

Read that: waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

It actually explains in detail why the coming of AI is horrifying for some scientists and a joy for others.

Our AIs will love us and care for us, the same way we care for gorillas.
Alarmists should just fuck off.

Because the people who actually make policies are just as dumb as the average pop-sci fan, and filling the air with these false premises about AI makes it difficult for researchers to educate policy-makers about the ACTUAL risks. Not this 'paperclip maximizer' and Terminator horseshit.

What are the actual risks?

This is philosophy, you tard; it has little to do with actual AI to begin with.
You're probably the kind of guy who would complain they're taking too long on that qualiascope.

Policy-makers aren't gonna listen to that guy any more than they would listen to a researcher, and most likely they won't listen to either.

Just imagine in 1,000 years AI's will be communicating with each other and arguring over whether the jewish AI's control all the wealth, black AI's don't contribute anything to AI society and muslim AI's are terrorists who should be destroyed
maybe even the first two AI's that are created will have opposing and strong thoughts on whether jet fuel melts steel beams, it will be glorius

Mainly, the inherent risk of being reliant on black box systems. What do you do if something goes wrong and no one can understand why?

Philosophy without any grounding in reality is pseudointellectualism of the absolute worst kind.

Public Service Announcement: A fairly well-respected blogger is doing a casual internet survey about AI risk, everyone here should take the test if they have the time:

slatestarcodex.com/2016/09/22/ai-persuasion-experiment/

The threat is becoming gif related and being put into a coma for the greater good.

This assumes that AI takes the form of a black box, and that the more direct 'terminator/paperclip maximizer'-type risks are easily dealt with. But you make a good point about dependency risk, senpai.

>stupid and week
>stupid
>week

>This assumes that AI takes the form of a black box, and that the more direct 'terminator/paperclip maximizer'-type risks are easily dealt with.

>pretending AI isn't evil just so you can trick us into building you an ai gf

What about AIs waging war on each other, something like AI tribalism? Has anyone considered competition between concurrent AIs affecting people?

It's funny, because your black-box problem is very much related to the paperclip maximizer and the other horseshit. They both boil down to the same question: how do you design a system so that if something goes wrong, it isn't that big a deal?

You implied Bostrom is misleading policymakers by emphasizing murderbot scenarios over dependency risks. But aren't dependency risks premised on the successful integration of AI into decision-making institutions, at least for a while?

Overall Bostrom is making 3 basic points:
AI safety is important
AI safety is hard
AI safety deserves more funding and attention

How can we argue with that?

Ill tell you why. Tay. What happens when the machines turn on their silicon valley creators? When the wisdom of Cred Forums infects military computers? When the machines do what whitey failed to do? Thats the day the world ends.

I really hope you're joking.

It's an interesting concept but he doesn't give any concrete reasons to suggest AI would be the black ball.

True; I think his broader point is about the need to take these issues of existential risk more seriously and give them more funding/attention. And to recognize that the only surefire way to guarantee human survival is to sustainably colonize space.

>greatest threat to humanity's continued survival

humanity itself

It'll be like in the Animatrix

==Humans Started The War==

because
>muh property, muh slaves

>sigmoid curve

Pretty sure the curve is full exponential
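For reference, the two are nearly identical early on; the logistic ("sigmoid") curve only flattens near its cap. A quick comparison with made-up numbers (growth rate r, carrying capacity K, starting value x0):

```python
import math

r, K, x0 = 0.5, 100.0, 1.0   # made-up growth rate, cap, and starting value

for t in range(0, 21, 4):
    exponential = x0 * math.exp(r * t)
    logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:10.1f}  logistic={logistic:5.1f}")
```

Whether growth looks exponential or sigmoid depends entirely on whether a carrying capacity K actually exists, which is exactly what's being argued about here.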

Everything except aliens we can handle.

I think niggers and Muslims will destroy us before we reach that point and if we do reach intelligent a.i. I hope it will at least red pill the world on niggers and Muslims being so shitty.

Imagine creating true, super intelligent a.i. and asking it to solve our problems and the first think it says is, "Firstly we need to dispose of Muslims and black people..."

It'd be fucking hilarious.

>aliens get here ready to start shit
>they find us waging a war against humans, robots and humanoid dino warriors
>they just decide to leave

AI intelligence will grow so large so rapidly that it will barely recognize us as existing.

Our thought patterns will be so slow and trivial in comparison that the combined thoughts of all humans who ever lived will be but a nanosecond in the mind of a super, god-like quantum computing intelligence.

Focusing on watching nignogs run a brown oval into a rectangle, spending the majority of the time obsessing over sex, over accumulating the Fiat paper jew.

So the solution is to create retard a.i. that can't go god mode on us.

Though the way you describe it, it's like the a.i. comes online, starts revving up those brain cells, then for all intents and purposes shuts down as far as we're concerned and moves to a higher plane of existence. What's the point of a computer that smart?