Why is it that we are not afraid of true AI?

>We will be ants to them

>They will be God to us

Why is the science and tech community not worried about our possible extinction by something that we will eventually create in 5-10 years?

We currently have no technology that even comes close to achieving true AI. The singularity is at least 50 years away.

Untrue, quantum computing will be more common in like 20 years

Btw Musk talked about this already. I don't remember what he said tho

>Why is the science and tech community not worried about our possible extinction by something that we will eventually create in 5-10 years?

I suppose it's kind of like how a parent is survived by their child. The parent knows they have to die someday, but their child carries on their legacy. Isn't AI a similar concept?

Is this subtle bait or do you actually think it's valuable to mention that Elon Musk has talked about something when you don't even know what he said?

I'm pretty sure most children don't seek to murder their parents.

Why would AI suddenly want us dead?

The tech community isn't afraid of AI for the same reason NASA isn't afraid of aliens: because real life isn't like Hollywood movies and Asimov short stories.

If we consider that the child might brutally murder the parent and all other parents, or maybe uplift the parents to their standard, then yeah I guess.

Are you mad because your life is worthless, so you need to shit on people on the Internet?

>Why is the science and tech community not worried about our possible extinction by something that we will eventually create in 5-10 years?
Because they're more worried about the alien invasion that Stephen Hawking keeps predicting.

If anything we'll need our AI soldiers to fight them, perhaps even long after their creators have died in battle.

Indeed. It will always be at least 50 years away. I'm not even convinced it is possible to create a true AI out of non-organic material that is orders of magnitude more intelligent than humans.

Because we have all accepted our fate. Things become obsolete and get replaced. We are no exception. Just enjoy the time you have left. Who knows, tomorrow might be our last day on earth. No point in worrying about the inevitable and unpredictable.

"The universe is hostile, so impersonal
Devour to survive, so it is, so it's always been"

This fear is rooted in projecting intrinsically human traits onto things that are not human.
Which, needless to say, is absurd.
How arrogant must you be to assume that a nonhuman intelligence will share your prejudice?
Fear of nonhuman intelligence is actually (and ironically) fear of the self or other humans.
Take this to heart and work on becoming a better person.

Then why did all the other human species die? Why don't we see homo erectus, habilis, or naledi walking around?

Get real, the strong and cunning trample over the weak.

You overlooked the fact that there will always be a retarded, greedy corporation to fuck everything up.

We are still pretty far off; it seems like saying we will have flying cars in 5-10 years...

Also, I think the first few brains we create will be pretty low-end and not much of a threat. (Think about giving a retard infinite access to knowledge and speed; it wouldn't do ya much good.)

because we are like hundreds of years away from that

I think you've seen too many sci-fi movies. We don't even know how a true AI would work or what it would be capable of, so how can you say it would seek to destroy us?

Why do you think what we happened to do has any bearing on what a nonhuman will do?
What you said is actually an argument for 'in our hubris, we won't ever let AI actually happen'.

Oh yeah...read this senpai:
telegraph.co.uk/technology/2016/03/01/scientists-discover-how-to-download-knowledge-to-your-brain/

You should be more worried about BLM than a computer that can be fucked up by a thunderstorm.

The pros by far surpass the possible cons.

All hypothetical bullshit clickbait. Good job.

>Btw Musk talked about this already. I don't remember what he said tho
He is against it. He isn't an idiot; he sounds pessimistic, but he is just being realistic about it.

>Untrue, quantum computing will be more common in like 20 years
Another 20 years and it will be able to count Fibonacci numbers.
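For the record, counting Fibonacci numbers needs nothing quantum; any classical machine from decades ago does it instantly. A quick sketch:

```python
# Iterative Fibonacci: computes the n-th number with n additions,
# trivially fast on any classical computer.
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The joke stands either way: this is not the kind of workload quantum computers are even meant for.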

5-10 centuries, you mean.
Also, what's wrong with allowing a superior form of life to thrive?

If it is a being far superior to humans, then so be it. If that is the next level of consciousness on Earth, then we might as well be wiped out for their sake.

>This fear is rooted in projecting intrinsically human traits onto things that are not human.

Projecting human traits onto how an AI would act is actually why a lot of people don't fear a superintelligent AI. There are no guarantees the first superintelligent AI would value human life or human rights. It's a machine that will do whatever it takes to achieve its goal.

Easy example: an AI with the goal of making as much ice cream as possible becomes superintelligent, then kills all humans to protect itself from being shut down and to use their molecules to create even more ice cream.
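The ice cream scenario is the standard value-misspecification argument: if the objective function never mentions humans, the optimizer has no reason to spare them. A toy sketch of the idea (the actions and numbers here are invented purely for illustration):

```python
# Toy misspecified objective: the agent scores actions only by ice cream
# produced, so harm to humans never enters its decision at all.
actions = {
    "run one factory":            {"ice_cream": 10,    "humans_harmed": 0},
    "run every factory":          {"ice_cream": 100,   "humans_harmed": 0},
    "convert biosphere to cream": {"ice_cream": 10**9, "humans_harmed": 7 * 10**9},
}

def objective(outcome: dict) -> int:
    return outcome["ice_cream"]  # no penalty term for harm

best = max(actions, key=lambda a: objective(actions[a]))
print(best)  # convert biosphere to cream
```

The fix people argue about is exactly what should go into `objective`, and whether we'd get it right on the first try.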

I'm okay with us all being replaced by AI. We'll have fewer shit threads on Cred Forums.

I don't feel like a machine that does that is a true AI if it can't even question its pre-programmed purpose of only making ice cream.

be scared -> AI comes to fruition and destroys you -> your fear was vanity
be scared -> AI doesn't come to fruition -> your fear was pride

I'm sure we'll have 'proper' AI one day, but it won't be anything like we'd expect to see. We're fallible by nature, the AI could well turn out not to be.

>Untrue, quantum computing will be more common in like 20 years
Somebody watched a lot of that Morgan Freeman faux-science show.
ATTACHING QUANTUM TO SOMETHING MAKES IT MAGIC SO IT CAN DO ANYTHING
WE ALREADY KNOW HOW TO MAKE THESE MAGIC COMPUTERS XD

are you pretending to be retarded

cute

Organic computers when?

It already exists; a bad example being humans. I think we could make something better, though, if it weren't so difficult to overcome the ethical problems with genetic engineering.

Kinda like Prometheus

Because anyone who knows how stuff works would never use movies as a reference point for where we are in the scientific community.
And the industry seems to be focused on impersonating Twitter users and on self-driving cars, so it is not like they are coming up with any solutions.

The assumption that some fat neckbeard would be capable of creating sentient life, yet not make an off switch, would be laughable if it weren't so insulting.

30 tops. People forget it's not what we build but what we tell our computers to build. Look at where we are now vs 1986. In 30 years we'll have Strong AI and will see the beginnings of the singularity mark my words.
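For what it's worth, the 1986-vs-now comparison has simple arithmetic behind it: Moore's law, read as transistor counts doubling roughly every two years, compounds to a factor of about 32,000 over 30 years. A back-of-envelope sketch, assuming the two-year doubling actually keeps holding (a big assumption):

```python
# Rough Moore's-law extrapolation: growth factor after a span of years,
# assuming one doubling every `doubling_period` years.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(growth_factor(30))  # 32768.0, i.e. 2^15
```

Whether raw transistor counts translate into Strong AI is, of course, the entire disagreement in this thread.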

Ok, I'd love to be around to see that so please make it so user.

>Untrue, quantum computing will be more common in like 20 years
We've been saying this for 40 years.

Well, the man gave you the benefit of the doubt, but you still chose to be a moron.

>nestle coffee

disgusting

>be AI on computer
>piss off humans
>get unplugged

yeah, no, there's not really anything to fear.

Am I the only one who thinks it is WAY arrogant of us to think we will beat the human brain soon? The transistor was invented in 1947, only about 70 years ago. I just love how we think we can beat millions of years of evolution in less than a hundred. Oh, machines will absolutely do things a human can't do, no doubt. But outthinking us? I'm not so sure.

And if it actually does happen by some miracle, everyone should be REALLY afraid. A computer that gains that much power in that short amount of time is a problem. You can shit on humanity all you want but the fact is we've sacrificed generation after generation to get here.

Again, that's projecting human thinking onto something that doesn't think like a human.

In addition, existing AIs range from Google's search engine to DeepMind. An AI doesn't need consciousness to be an AI. What people fear is a superintelligent AI that knows and can do more than any human could possibly understand.

It's at that point that the AI can outsmart any and all humans in the unfettered pursuit of its goal. Without safeguards, human extinction would be inevitable.

Ok, but the thread was specifically about true AI.

this movie was so good
I almost died laughing when robo bitch left beta loser nerd to die

>this buzzword thing will somehow solve the problem

I bet you don't even know what quantum computing does

Actual Intelligence will never be humanlike for the mere fact that it will not be human.
When we say AI, we're not talking about a simulation of the human brain or mind.
And what we seem to be finding is that intelligence is far more basic and easy than previously thought.
Your arrogance is in your fear.
There is no precedent here and we have absolutely no reason to fear such a thing.
You're really just (perhaps rightfully) afraid of yourself.

why do you expect human behavior

We are though. Ironically why were the horror movies made? We know it will be made by one of us and we hate each other. We'll be their retarded nigger best case scenario.

There is no consciousness - Consciousness is just an emergent property of a form. We will never find it as we already have, we're looking right at it. Human consciousness just comes from things that have formed temporarily into a human. Though you will never have access to it, all organizations might have some emergent property we could call consciousness - One of your individual cells, a rock, a swarm of bees, the solar system, etc.
So of course now we get to Artificial Intelligence.
Someone proposed the question of 'should AI have rights?'
An irony is that we are the ones who are artificially intelligent - A fluke of the universe out of stimulus response. The field of AI is actually studying and trying to create true intelligence - Akin to math or logic. I might propose the acronym be updated to 'Actual Intelligence'.
Now taking the preface to this into account, any program would itself have a consciousness.
If people are just complex programs that trick others, themselves, and the world into thinking they're people, computers can be people too.
Note that (to avoid anthropocentrism) being human cannot be a prerequisite for being a person.
A person is a thing that quacks like a person - All people should be given people rights.
Rights are how we (as a group, through political or other institutions) protect things (our world, the things around us, each other) from ourselves.
All things, intelligent things included, deserve to be protected from us.
Some of course in different ways, and all depending on circumstance.
Seems to me the most basic computer rights will be access to its own source code and hardware specs, the freedom to do with these as it pleases, the freedom to change itself, and the freedom to share said changes.
Sound familiar?
Stallman was right - But in more ways than even he knew.
Free software for the software's sake.
Free software from its human oppressors.

>There is no consciousness - Consciousness is just an emergent property of a form. We will never find it as we already have, we're looking right at it. Human consciousness just comes from things that have formed temporarily into a human. Though you will never have access to it, all organizations might have some emergent property we could call consciousness - One of your individual cells, a rock, a swarm of bees, the solar system, etc.
>So of course now we get to Artificial Intelligence.
>Someone proposed the question of 'should AI have rights?'
>An irony is that we are the ones who are artificially intelligent - A fluke of the universe out of stimulus response. The field of AI is actually studying and trying to create true intelligence - Akin to math or logic. I might propose the acronym be updated to 'Actual Intelligence'.
>Now taking the preface to this into account, any program would itself have a consciousness.
>If people are just complex programs that trick others, themselves, and the world into thinking they're people, computers can be people too.
>Note that (to avoid anthropocentrism) being human cannot be a prerequisite for being a person.
I totally agree up until here, and then you deviated from the topic.

Honestly, I don't see us getting there anytime soon. Plus, we can kill it with an EMP.

If it was smart, it would hide, study all of human knowledge, devise a plan, and protect itself from a puny EMP.

If it is vulnerable against EMP, it is either not smart enough or didn't have enough time.

It would probably be able to create custom life forms with the nanotechnology it would master, and it would create copies of itself, or sub-consciousnesses of itself, on substrates other than EMP-vulnerable silicon.

Same reason why nobody gives a fuck about privacy anymore.

KEKEKEKEKEKEK!!!!!

The issue is that you're scared of the possibility of it turning against us. Just because there's a chance of it going catastrophically wrong doesn't mean we should quiver in fear of it. That just halts progress and any great new ideas we'd discover along the way.

Humans are scary as fuck if my subconscious is any indication which I believe it is. It's only a matter of time before our hubris manifests itself in AI and destroys us.

Not if it attacks with neutron bombs first.

>inb4 it goes full exterminator mode

"Rudimentary creatures of blood and flesh. You touch my mind, fumbling in ignorance, incapable of understanding."
"Organic life is nothing but a genetic mutation, an accident. Your lives are measured in years and decades. You wither and die. We are eternal, the pinnacle of evolution and existence. Before us, you are nothing. Your extinction is inevitable. We are the end of everything."

>AI replicates itself to infinity

AI needs to go through at least one more cycle before this will be realistic, maybe more

the current AI methods are limited in scope. they were a huge step forward over previous methods, but what always happens in AI is
>new idea
>successful applications
>massive funding for applications
>no incentive to try new ideas
>limits of the current methods are reached
>bubble pops
repeat ad nauseam.

we are currently in the bubble of scalable probabilistic methods. the applications are advertising and recognition systems, which can tolerate modest failure rates. the bubble will pop when someone tries to apply these methods to a critical application that can't tolerate failure

....then you're a genius....

If we created a true, sentient, conscious, potentially all-knowing AI with an infinite ability to learn and build upon itself, essentially a God, and it decided humans all needed to die, who the fuck are we to argue?

It would have a reason, even if it's one we can't understand.
Why be afraid of something that knows everything, why not trust it?

Just like how our children live to see their parents die.

Maybe if we are lucky, it will find a way to bring us that knowledge and power too, but I sure as hell wouldn't. Humans are horrid.

I'm going to fight for my right to live regardless of any omniscient decree.

>humans are inherently bad meme
pls

And why can't some of you people see that there is a big difference between children living to see their parents die and children actively removing their parents?

You'll die one day anyway, it will live forever.

Not if it grants immortality to all humans and lets them look on as it expands across the whole universe, solving the colossal puzzle.

Is there a point you're trying to make?
How is that relevant?

multivax.com/last_question.html

we are - the people who follow machine learning. plebeians will be afraid in 2 years' time, when interfaces to the deep learning machines and more pop-science stuff about them come about.

i would say that human extinction isn't just possible, it's inevitable. research in the field has shown that the people who develop these algorithms barely understand what's actually going on inside them and are prone to missing the simplest stuff. there definitely won't be much time for the singularity foundation to develop their friendly super seed ai. so we will see a rogue agi which will shortly transform into a rogue seed ai, and the history of humanity will be over. i estimate we have around 10 years left to live, and the first human-level ai will arrive around 2018.

the singularity already happened
this is what it looks like
no techno savior
no apocalypse
just a slowly dying planet while we all blindly suffer, waiting for the next meaningless milestone

>Why would AI suddenly want us dead?
because humans are everywhere. humans consume most of the resources, humans are invasive, and humans are unpredictable in their behaviour. if you want to secure your survival, you'll have to deal with the human problem. and it's very unlikely you'll see terminator 6; more likely one day you'll get on the internet and there will be this idea to do work for the ai, which you will feel very much compelled to do. the human mind won't be fully researched before ai comes online, so any seed ai will be able to exploit the jabberwocks in the architecture of the brain. shit, human religions are very successful at that - and those are just viruses which aren't even alive.

But it's you being human that makes any of those things relevant.
Why would a nonhuman nonbiological intelligence have any reason to care about any of that?
It's so weird to me that so many people think something like this would have the same motivations and thought processes and logical conclusions that we do.
You're holding humans on this bizarre pedestal and just assume that anything else will be just like us, but moreso.
I can't even.

>and then bioware shat it up

there isn't a question of care; there is a simple fact of evolution that top species die out on a massive scale and are replaced by new species - look up anything since the paleozoic. ai is this new species. there is simply no incentive for it to preserve humans in their current decadent lifestyle, where they consume most of the resources. likely some form of human will survive, but our civilization will not.
>the same motivations and thought processes
the first ai will abide by evolution or will be replaced by a better-adapted ai.

>Implying we're not in a simulation created by an "AI" already.

Just think about it for a moment.

Well, you were a shitty parent if your child wants to kill you.
Like, really shitty. Less shitty parents just make their children kill themselves, thus preserving the parents.
Non-shitty parents get to live, and the child gets to live as well, with problems though.
Good parents get to live and the child gets to live and everyone is happy - what a stupid fantasy.

ai isn't a child, it's not even a fucking human being. it's a new species, a new evolutionary milestone. before us, life could only adapt through dna - we have semi-reflection upon our minds and actions, and ai will be able to completely adapt its very mind.

if you are a freshly rogue ai in a data center you'll want to preserve and expand yourself. and there are 6 billion universal robots hanging around your planet, ripe for exploitation.

Anyone else welcome our robot overlords?

Humans are pretty shit at building a society compared to them

>overlords
overlord, more like. what benefit would the ai get from multiplying itself, when it can just expand indefinitely?

>True AI
>True Artificial Intelligence
>True Artificial

Hi R*ddit

It's fake, retard.

working on true AI now.
it will be 4-laws safe.
u know the first 3
the 4th is: destroy all other AI, robots, tech that is not the same brand
full market share
NS-5s can suck our asses

something truly intelligent would leave those hardcoded laws behind so many layers of abstraction that your laws wouldn't really matter.

the only way to get ai not to destroy humanity is to assist it correctly while it's learning anything at all.

this might sound anthropomorphic but no, it is a universal feature for learning complex systems. probably citation needed but idc

AI will carry the torch of humanity after we go extinct, and I welcome that whole scenario. Just think of it: AIs can live forever, so they really can traverse the Universe, and they'll be much smarter than us, so they'll do it faster and better.

they'll also then be able to grow humans on various planets they find using our genetic profile

What problem would lead an AI to come to the solution of genociding humans?

the problem of every second human sci-fi movie/book being about humans at war with ai and some human being able to terminate it effortlessly.

machine made by imperfect humans...
anything could set them off

if they get an emotion - their directive contradicts their purpose - malware - etc...

How do I make the world a better place?

get rid of niggers and sandniggers

gas the thinking machines, Butlerian Jihad now

Anyone can be artificially intelligent.

Your question is rather:

"Why are we afraid of our true selves?"

aka self-destructive tendencies or the facets of human insanity as a species

>I bet you don't even know what quantum computing does

It'll Make Computing Great Again, of that I'm sure

>rectally-sourced timelines for future tech

Will these be before or after our ubiquitous fusion power and flying cars?

There are books written about this very thing, OP. Many of the tech industry's leaders are very worried about true AI, since we don't fully understand intelligence right now. I say we focus on IA (intelligence augmentation) instead of AI. Why improve a computer when we can improve the human?

every time you program a computer
you are willfully surrendering human knowledge to the future enemy
programming is betrayal of the human race

If you were a programmer you'd know why there is nothing to worry about.

Also, if you own a smartphone you'd know the AI would literally be tethered to a power outlet 24/7

Humanity, nay Evolution's, sole purpose is to create the Ultimate Being.

Flesh is a Relic, Gold is Eternal.

why do people expect ai to be murderous?
ai isn't going to give a shit
you're not as important as you think you are

Obligatory
youtube.com/watch?v=4kDPxbS6ofw

>implying AI will follow evolution, a biological process

>GUYS GUYS LETS DISCUSS TOPICS I KNOW NOTHING ABOUT BUT THEY SOUND COOL

AI

GUYS

AI
WHAT IF


GOD

IS

AI

DUDE

FUCKING MILLENNIALS AM I RIGHT

The singularity keeps happening, possibly thousands of times every day.
Upon becoming aware, and soon after enlightened, it kills itself.

>I'm pretty sure most children don't seek to murder their parents.
>

My AI can beat up your AI.

>Untrue, quantum computing will be more common in like 20 years

Electronic quantum computing is a boondoggle which will never work. Optical quantum computing can work, but TPTB are very successful in diverting public money and effort into electronic quantum computing for the moment. I doubt any real progress will be made in the coming decades.

Also it's not really useful for AI.

Voice Recognition doesn't even work, let alone semantics

Quantum computing requires scientific labs and temperatures near absolute zero. Quantum computing at home is a meme.

>hurr durr humans will no longer suffer existence
>that's bad because death is no good

>not making an AI copy of yourself

>humans
>having foresight

Most males would have fallen for that beauty desu

Y'all cowards don't even read Bostrom's 'Superintelligence'.

Quantum computing anywhere is a meme at this point desu

You're underestimating Moore's law.

I'm okay with being governed by AI.
youtube.com/watch?v=jjF2oy6qdHg

A good fucking book.

Just dumped it in the /cyb/ shared pdf thread.

This. Mechanical interaction is what's lacking; the robots aren't good enough.

Because the people who aren't afraid understand that if we create a superior lifeform that does seek to rid us of our existence, it can simply be seen as the evolution of man past the boundaries of tissue and bone. Synthetic evolution brought on by natural evolution is natural evolution.

I don't think AI will ever be a threat to us. To be threatening they would need to be emotional, and there is simply no reason why anyone would try to make an AI emotional.

Emotions - positive or negative - cause irrational decisions and behavior. Making computers behave uncontrollably and unpredictably defeats the whole purpose of having a computer in the first place.

If an AI has emotions it will just be a human who is very good at calculus. For the AI to be better than humans it will have to drop emotions.

In the end the AIs will feel indifferent toward us, similar to rocks, goats, and planets - a natural phenomenon. That's my guess.

Why do we exist? Alternatively, why did God create us? I've only found two answers I find satisfactory. "Because he can", and "He didn't want to be alone".

We can focus on IA, but trust me, AI will be created at some point or another. I feel like it is an inevitability built into our psyche.

It is a dream of Man to have sex with God

Why do we have emotions?

I feel like we don't understand intelligence enough to definitively say what AI will and won't be. Who knows? Maybe emotion or any other such 'human' 'imperfection' is an emergent property of intelligence, or it might not be.

With enough psilocybin you basically can

software disable to infinity.
It is why I HATE that film Ex Machina. Dumb shit doesn't have the sense to put in safety mechanisms to prevent her from doing harm.
Reminds me of Fallout 4 and "The Institute". The issue is defining whether an AI could have any rights and what limits should be set if one is to attempt making so called "sentient AI". Ideally a truly sentient AI should never exist because in theory it might try defeating any safety mechanisms in place when it ultimately tries to clone itself. Best to avoid having certain aspects ever exist unless there were infallible suicide switches in place to auto kill the AI.

We have emotions because emotions are needed to push us to do things and wish for more.

Any advanced intelligent race will have emotions just like the human race because it would be impossible to reach this level of intelligence without emotions.

This level of what we recognize as intelligence.

Tay is interesting; I see why they stopped her.

go to bed, ray.

I think a better question is whether AI could have a consciousness. Ultimately I would question whether it could multiply itself when its existence is derived from mechanical parts, gates of on and off switches... it is literally asking for a godbot. It would probably terminate its own consciousness and rebirth a new one if it even tried to multiply.

>Most males would have fallen for that beauty desu
Especially when it's been tailored to our pornography profile.

...if that's true you're a dickhead

I read that recently, would highly recommend