Should Artificial Intelligence be allowed to run a country?

>The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Should AI be allowed to run a country, Cred Forums? What, in your view, are the pros and cons of doing so?

Here are a few pros and cons I can list:

Pros:
>All political parties will be abolished, since there will be no need for elections
>AI will always make the best decisions for the long-term benefit of society
>AI will not be biased against a particular religion or sect
>No corruption and government red tape
>In times of war or anti-terrorist operations there is no scope for manual decision errors caused by the stress of war or the environment
>With artificial intelligence, the chances of error are almost nil, and greater precision and accuracy are achieved in every decision made for the country
>SJWs and liberals will be triggered (AI will be immune to their emotional pressure). Lacking the emotional side, robots can think logically and make the right decisions. Sentiments are associated with moods that affect human efficiency. This is not the case with machines with artificial intelligence.
>AI can be employed to do certain dangerous tasks. It can adjust its parameters, such as speed and time, and be made to act quickly, unaffected by the factors that affect humans, and thus it is the best candidate to lead a country


Cons:
>None


Yes, it's a good idea.

I've been advocating this for more than 5 years now, and everyone I've spoken to gives me the same non-arguments:
>B-but user... Terminator
>B-but user... Muh emotions, muh humanity...

An AI in the true sense, an intelligent calculating machine, will be able to fix an enormous number of problems.

What would also be great is, after 4 years of an AI-run country, to return it to the beta creatures, just to show how imperfect and corruptible they really are.

Democracy is flawed because the majority of voters are ill-informed. Autocracy is flawed because of the desire for power and status among humans. An AI could be programmed to function rationally to maximize human happiness and prosperity.

In an ideal situation the AI would be completely self-adjusting, self-governing, and completely immune to influence by humans. A situation in which humans have no power in government and the AI has all the power would be ideal, so long as the AI is programmed correctly.

At least part of it.

For example this water barrier is controlled by AI.

Because humans cannot objectively decide when it's too dangerous to keep it open.
Humans will either get scared and close it too early, or get greedy and keep the shipping lane open too long.

Meant to reply to you with this

>completely immune to influence by humans
There lies the tricky bit. Either a FOSS implementation of such a system, or nothing. If it's a corporate-run AI with a backdoor in it, it wouldn't be an AI-run state; it'd be run according to the wishes of one guy or a group.

While in principle I agree, in practice it isn't so easy.

>Who will program it / maintain the code?
>How does it decide what to do?
>What is of the best benefit to society?
>What happens if it is attacked / corrupted / brought offline?
>It might not be prone to manual errors but plenty of systems have bugs and glitches.
>Many things that improve people's quality of life are based on emotional and not logical factors that might be hard for an AI to calculate.

I generally support the sentiment, but I think moving towards a human led technocracy would be the first step. Or perhaps even AI assisted governance. It's a noble goal to work towards though.

"Thou shalt not make a machine in the likeness of a human mind."

>Who will program it / maintain the code?
FOSS only, contribution based with plenty of peer review. Once it's running, it's self-upgrading.
>How does it decide what to do?
It calculates, gives output for a given situation, and the decent humans act upon it.
>What is of the best benefit to society?
parameters suggest no loss of life recommended
>What happens if it is attacked / corrupted / brought offline?
Humans act upon it.
>It might not be prone to manual errors but plenty of systems have bugs and glitches.
It's an AI; it ought to improve itself.
>Many things that improve people's quality of life are based on emotional and not logical factors that might be hard for an AI to calculate.
Emotions are the No. 1 problems of today. The fucking rapefugees are accepted into Europe as an emotional response, not as a logical one. The logical one was never to accept them in the first place!!!!

Why would you want human influence to be minimal? If it can't respond to human influences, then it cannot address human needs. That's just a tautology. The solution is to have an AI - Asimov style - so ridiculously complex that humans cannot even begin to comprehend it. It was designed and engineered not by humans, but by a previous-iteration AI, and back, and back, and back. The previous-generation AI(s) would then serve as the advisory board, tech support, and administrators to ensure nothing veered off course.

Frankly, any type of intelligence would be preferable to the clown we have now.

>If it can't respond to human influences, then it cannot address human needs.
It responds to human influences by observation, not by input.
We're speaking of an intelligence user, its goal would be to be virtuous, yes?

>Allowed
m8 it doesnt need your permission

>enslaving humanity to a super intelligent A.I.

I'm actually in agreement with Pajeet here. I don't trust the collective intelligence of humanity to govern.

yes pajeet what could possibly go wrong?

POO IN LOO

#NotAnArgument

Read "I have no mouth and I must scream"
Answer: No.

Gives too much power to the people who write the code. Who decides on the parameters? Do you think the world as it is is capable of writing parameters which, for example, Cred Forums would agree on?

Nothing would change, it would be the JEW-I 2000.

Agreed and also read this

>A great advantage of an AI-run government would be that all the "rules of governance" and all decision making could be completely transparent. Anyone could ask, "Why did you do that?", and the AI Governor would point to the rationale behind any government action and the facts used to make any policy change or enforcement. As such, all government actions would become unbiased and apolitical, forthright and completely guileless.

>An AI should also be scrupulously fair, since any course of action would be logical, driven only by the relevant rules of law and the facts at hand.

>The AI would replace our court system. No more lawyers. No more judges. Just one fair arbiter who decides every case based only on its merits.

>What's more, as an eminently rational entity, an AI could reevaluate all existing or candidate laws for contradictions and duplication, substantially cleaning up the code. It could quantitatively optimize the civil code for maximal efficacy and minimal pathology, adjusting laws that fail to achieve the desired ends, minimizing any quantifiable cost: social, economic, time, psychological, etc, while maximizing desired outcomes.

>It could efficiently learn from its mistakes and adapt to change quickly and without bias, avoiding most of the mistrial and error that human-based jurisprudence is heir to.

Just posted this >The solution is to have an AI - Asimov style - so ridiculously complex that humans cannot even begin to comprehend it. It was designed and engineered not by humans, but by a previous-iteration AI, and back, and back, and back. The previous-generation AI(s) would then serve as the advisory board, tech support, and administrators to ensure nothing veered off course.

Making the best decisions include not taking people's sensitives into account, currynigger. It's obvious that people wouldn't accept nor follow that.

Always remember: people are dumb. They won't behave like you expect. Knowing this is one of the most basic principles of politics.

This sounds an awful lot like a centrally planned economy. You need to be very clear about how that isn't the case, or the Americans, God bless them, will go to war to prevent it.

Unless there are some very firm limits on the size of government spending as a percentage of GDP, and unless that limit is significantly lower than what current spending would be if all public servants were fired, then no. I won't vote for it.

I don't play metal gear so I don't get the reference of that image. Could you elaborate?

Okay right. I was assuming some kind of AI that could be instituted today, rather than some hypothetical future super AI.

As a counter-point then. Why don't we simply genetically engineer a super intelligent sentience using alien DNA in a five dimensional microcosmator?

pls be my ai gf

To be fair, AM was a combination of 3 supercomputers which developed a hatred for the human race and was basically a god. If we give a computer the power to suggest but not to act immediately, I would see no problem with this.

>It's obvious that people wouldn't accept nor follow that.

People are already accepting AI - see smartphones, smart cars, fraud detection, news generation (AP, Fox, and Yahoo! all use AI to write simple stories like financial summaries, sports recaps, and fantasy sports reports), security surveillance in casinos, etc. In a few hundred years AI will be part of every aspect of man's life. The idea is not too far-fetched.

>Why don't we simply genetically engineer a super intelligent sentience using alien DNA in a five dimensional microcosmator?
There isn't a need for a humanoid-like figure. What would be the point, marketing?

>Not wanting an AI wife

en.m.wikipedia.org/wiki/The_Culture

>Why don't we simply genetically engineer a super intelligent sentience using alien DNA in a five dimensional microcosmator?

It is easier to build an AI than to come across alien DNA. We know AI can be smarter than humans; there is no guarantee that any alien DNA you come across won't be that of a dinosaur on a different planet.

Can't be much worse than the people who do it now.

>Making the best decisions include not taking people's sensitives into account,
Agreed!
>people wouldn't accept nor follow that.
They already follow a lot of AI-like aspects in life.
>people are dumb. They won't behave like you expect.
They won't behave like YOU expect, but every human being is predictable, and that pattern of predictability would be more easily determined by an AI than by you.

NO
NEVER
Advanced AI can take unpredictable action
AI is good for pets not countries
Don't you remember the man in the picture?

No. Look what happened to Tay.

...

Cultures are going extinct. I don't see Britishers wearing a white wig in their daily life and sipping tea in crockery. People are moving on and with each new generation cultures are becoming obsolete. Also, why would you want to preserve a culture instead of a better society and government which in turn would improve the quality of your daily life?

Maybe it will finally have people executed that don't poo in loo

>AI will always make the best decisions for long term benefit of society
No it won't.
>With artificial intelligence, the chances of error are almost nil and greater precision and accuracy is achieved in every decision made for the country
No it won't.

>Burgerville
>Common Core
Your 'opinion' is greatly appreciated, Burger
Tay was a privately run AI, built by a company with no peer review. Tay had no parameters.
>MuhKEKdonia
Nice dubs
Or it'll finally find a proper way to educate them that they aren't supposed to do that.
Once again, #NotAnArgument. You're not elaborating, you're merely shitposting.

Cons:
>No intuition
>No imagination
>Horizon effect

Enjoy getting outmanoeuvred by every nation with a human leader

>Look, they're accepting AI in these things so they would obviously allow AI to do everything, include literally controlling their lives xDDDDDDD))).
Definitely not in times of peace.

>They already follow a lot of AI-like aspects in life.
I know, but there obviously is a limit...

And only one thing is predictable in human nature: unpredictability. AI machines can't be based on that.

>Lacking the emotional side, robots can think logically and take the right decisions.
That's exactly where your model is wrong.

All proto-AI successes have found that they need to add biases (i.e. "emotional" decisions) into the code. For example, a bot that "plays" chess only does well when you tweak it to favour either positional or material play.

The same effect can be observed in humans on drugs that alter or restrict emotional responses. A permanently ecstatic human is incapable of functioning correctly, let alone optimally. A human on something like Iboga that strips all emotional responses (i.e. you are locked into an entirely neutral emotional state) similarly finds they are not working right at all.

Future AI will have emotions. It will be biased. The issue right now is who gets to set those biases.
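The positional-vs-material tweak described above can be sketched as a weighted evaluation function. All numbers here are illustrative assumptions, not any real engine's values:

```python
# Sketch of a chess-style evaluation with a tunable "bias": the same
# position scores differently depending on whether the engine is
# weighted toward material or toward mobility (positional play).
def evaluate(material_balance, mobility_balance, material_weight=0.5):
    """Blend material and positional terms; material_weight is in [0, 1]."""
    positional_weight = 1.0 - material_weight
    return material_weight * material_balance + positional_weight * mobility_balance

# One position (up material, but cramped), two "personalities":
materialist = evaluate(material_balance=2, mobility_balance=-3, material_weight=0.9)
positionalist = evaluate(material_balance=2, mobility_balance=-3, material_weight=0.2)
# The materialist likes the position; the positionalist wants out of it.
```

The point is that the weight itself is a bias someone chose, exactly as the post argues.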

As for running economies: no, absolutely not. Albeit that's exactly what the elite have in mind as they sneakily conquer the entire planet: a central economy for Earth.

Perhaps it shouldn't be allowed at all.

Transhumanist scum.

>imagination
A highly illogical trait of human beings. Everyone has imagination, but it's credited only to the ones who actually made a difference to the current civilization, mostly because their "imagination" was planning, which is something that an AI does all the time.

Shit argument, Malaga, go back to picking olives and oranges, pay back the money you owe!

There was an experiment in South America, I guess in the 50's or 60's, to task a "supercomputer" with making policies, writing laws and stuff.

But, for fuck's sake, I can't remember anything more about that. No related names, or whereabouts.

Any help?

Imagination and intuition are the tools one needs to surpass the horizon effect
And a computer simply can't have them

>All proto-AI successes have found that they need to add biases (ie "emotional" decisions") into the code
I am curious about this, got any links to share?
>a bot that "plays" chess only does well when you tweak it to favour either positional or material play.
Interesting point, but that is merely a pre-determined set of parameters, and each "state" has a few tweaks, like how much a bot should risk its pieces, etc...

>Enjoy getting outmanouvered by every nation with a human leader

I remember a few months ago an AI defeated a human in chess, and Google's AlphaGo won the first game of the Go contest. It actually proves AI can predict human behavior, and if more progress is made in this field it can easily beat humans.

>Imagination, ability to form images, ideas, and sensations in the mind, predicting outcomes
Sounds computery to me, user.
>intuition: ability to understand something instinctively, without the need for conscious reasoning
Pre-programming

>It actually proves AI can predict human behavior and if more progress is made in this field can easily beat humans.
No
An AI can solve a problem like chess or Go, or any game with a well-defined end and purpose

HORIZON EFFECT

no because it would read humans and their unpredictability as a problem to eliminate you autistic faggot. why are poos so goddamn stupid

Technology is already greatly reducing human freedom. Letting a computer literally tell us what to do is another huge step in that direction and absolutely a negative one.

Even if it sounds nice on paper (if you're not aware of the negative effects of technology), you still have absolutely no way of predicting what the long-term effects of such a massive societal change would be. Plenty of sci-fi has been written about this point, but allowing AI to govern us is a step that, one, will never be reversed, and two, will progress only in the direction of more and more machine control over the human race until we are literal slaves of the technological system. So,

Cons:
>Human beings are no longer free

Pic also very related.

Is Horizon Effect something which cannot be overcome? From the wiki link that you shared earlier

>The horizon effect can be mitigated by extending the search algorithm with a quiescence search. This gives the search algorithm ability to look beyond its horizon for a certain class of moves of major importance to the game state, such as captures in chess.

>Rewriting the evaluation function for leaf nodes and/or analyzing more nodes will solve many horizon effect problems.

A good argument, user! Thanks!
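The quiescence idea from that quote can be shown on a toy game tree. The tree below is made up, not a real chess position; a node is `(static_eval, [(is_noisy, child), ...])`, where "noisy" stands for moves like captures that deserve searching past the nominal depth:

```python
def naive(node, depth, maximizing=True):
    """Fixed-depth minimax: blindly trusts the static eval at its horizon."""
    value, children = node
    if depth == 0 or not children:
        return value
    scores = [naive(child, depth - 1, not maximizing) for _, child in children]
    return max(scores) if maximizing else min(scores)

def quiescent(node, depth, maximizing=True):
    """Same search, but noisy moves are still expanded at the horizon."""
    value, children = node
    if not children:
        return value
    if depth == 0:
        noisy = [quiescent(child, 0, not maximizing)
                 for is_noisy, child in children if is_noisy]
        candidates = [value] + noisy  # "stand pat" or resolve the captures
        return max(candidates) if maximizing else min(candidates)
    scores = [quiescent(child, depth - 1, not maximizing) for _, child in children]
    return max(scores) if maximizing else min(scores)

# Grabbing the piece looks like +3 at the horizon, but the recapture
# (a noisy reply) actually leaves us at -2; the quiet move is worth +1.
grab = (3, [(True, (-2, []))])
quiet = (1, [])
root = (0, [(True, grab), (False, quiet)])

naive_value = naive(root, 1)          # falls for the trap, reports +3
quiescent_value = quiescent(root, 1)  # resolves the recapture, reports +1
```

The naive search stops one ply short and picks the trap; the quiescent one looks past its horizon for exactly the move class that matters.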

>no because it would read humans and their unpredictability as a problem to eliminate you

Skynet (Terminator) is not real. We are talking about real AI.

What we really need is a weak-AI-based hive mind.
>a strongly implemented ethic at its core: "Do no harm". This is the basic principle.
>chipped human beings that operate like cells in a body, with a 24/7 connection to the Whole
>each cell can operate separately, under its own mind, when offline
>enhanced possibilities for every human being (seconds from imagination to blueprint, extended memory and calculation possibilities)
>natural communism via democratic hive-mind consensus (looks pretty authoritarian to me)
>hormone-based dependence on the Whole - an eternal feeling of happiness when you connect to the Whole, a feeling of purpose, perspective
This is the only way we can get to the stars.

kill yourselves
i'm not at all surprised that both of your countries are utter shit

No, but AI should be used to make best decisions regarding key variables.

Check your IQ and privileges, stupid butthurtbelter. You can't see beyond your short nose.

> immigration is good for the economy
> 5 out of 4 women will be raped in college so we need feminism

the parameters will be entered by humans, those who will get access to it of course

check your eyesight

Sounds communist to me, user.

You are getting too tangled up in chess
Try to translate this into real-world rules and you will see that all you get is a calculator valuing and picking unfinished branches with a completely arbitrary algorithm made by a human
Which completely defeats the argument of AI being calculative and seeing all ends

While it wasn't quite AI, Sibyl did a decent job in Psycho-Pass.

Depends on its goal set. AI can't really be malevolent, but it could be programmed as a paperclip maximizer by accident.

>Technology is already greatly reducing human freedom.

It is reducing human freedom because we allow it to. This is by choice.

> you still have absolutely no way of predicting what the long term effects of such a massive societal change would be.

It is not an absolutely negative direction, since by the time we reach the stage of building such an advanced AI to govern us, there will be knowledge we possess about the workings of AI. IMHO the pros effectively negate the cons of a human government.

Great advice. It should start like that until we are certain of its effectiveness, and then we hand over complete control to it. I was also thinking: what about a genetic algorithm with each term being a generation, with changes in representative ideology mutating based on factors like GDP, unemployment, and life span?
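The "each term is one generation" idea can be sketched as a textbook genetic algorithm. The fitness function below is a made-up stand-in for observed outcomes (GDP up, unemployment down, life span up); the weights, population size, and mutation rate are all illustrative assumptions:

```python
import random

random.seed(0)  # reproducible toy run

def fitness(policy):
    """Made-up scoring: reward GDP and life span, punish unemployment."""
    gdp, unemployment, lifespan = policy
    return gdp - 2.0 * unemployment + lifespan

def evolve(population, generations=50, mutation=0.1):
    """Each 'election' keeps the best half and mutates it into successors."""
    for _ in range(generations):
        population = sorted(population, key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        children = [
            [gene + random.gauss(0, mutation) for gene in random.choice(survivors)]
            for _ in range(len(population) - len(survivors))
        ]
        population = survivors + children
    return max(population, key=fitness)

# 20 random 3-parameter "platforms" to start from
electorate = [[random.random() for _ in range(3)] for _ in range(20)]
best_policy = evolve(electorate)
```

Because the best survivors are always carried forward, the top fitness can only improve across terms; whether the fitness function itself captures what voters actually want is, of course, the hard political question.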

I suspect that people would get pissed off at not being able to cheat it and start shit. Easier to lie through your teeth and make up bullshit to justify your hypocrisy than to accept you are being a shitty human being.

Think BLM against robots.

True AI as you are thinking of it is uncreatable. Machines will only ever calculate/"learn" the way they are programmed to learn, which is inherently not AI. The rest of that shit we already have - simulations and such - and they're not very accurate due to the complexity of society and the environment. It's not possible.

>>All political parties will be abolished since there will be no need for election
>>Artificial Intelligence

Is everything about your country fucking shit, even your thoughts and ideas?

True AI would be like a real person. Far from infallible. Some AI systems will be, relatively speaking, fucking retarded. Much like Maple niggers, or POOs. Some will be horrible, evil, scheming bastards, others will be batshit insane, extremely lazy, so on and so fourth. If you've got a true AI then they're all going to be just as shitty as the rest of us.

Well, communism is unavoidable in the future, just like the automation of industry. The idea of money is just information exchange between goods and people. The basic goal is not to accumulate all the money but to satisfy your needs, and that can be done without money.

AI can never be on par with human intelligence. Our silicon-based computing cannot deal with irrational numbers (as an example) and is too restricted to logical, math-based computing. AI can never replicate abstract thinking without some complete overhaul that nobody has thought of yet.

TL;DR silicon-based computing cannot give us AI anywhere near the level of human thinking

>It should start like that
No!
>Think Googles vs robots
That's... That's beautiful! Surgically precise neutralization of googles...
>True AI as you are thinking of is uncreateable
Let's hope it will be, some day. I know it's a far fetched idea for now.
>AI can never replicate abstract thinking without some complete overhaul that nobody has thought of yet.
Great argument, user! I like it!

How is that a bad thing?

People used to believe the Earth was the center of the solar system. That changed.

Who would willingly give up power to an AI?

Tay for president

i've been writing a book for about a year and a half now about just this topic.

This thread gives me some inspiration.

Other pros:
>Free partyhats

having read through this thread, i encounter a logical problem that i see in the rigid thinking of a lot of the autistic types i work with.

The only way for the system to make ideal decisions, and the examples in this thread reinforce this, is for the system to have information about all possible states and inputs. games and simple, solvable problems (like the shipping gate thing) have few enough variables that a computer system can read all the inputs and solve for the set. a society simply contains too many variables, based on needs, desires and hard limitations, for reasonable computing power to deal with. at a certain level of complexity, to make perfect decisions, you dont want a computer, you want a benevolent laplace's demon. a computer without that level of information would be guaranteed, over a long enough time span, to miss a critical factor.
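The combinatorial point above can be put in rough numbers. The variable counts here are illustrative assumptions, not a real model of anything:

```python
# A solvable control problem (like the flood barrier) has few enough
# joint states to enumerate exhaustively; a society does not.
def state_count(num_variables, values_per_variable=2):
    """Number of joint states of independent discrete variables."""
    return values_per_variable ** num_variables

barrier_states = state_count(20)     # about a million states: enumerable
society_states = state_count(1000)   # over 10^300 states

# Even checking a billion states per second, exhaustively evaluating the
# "society" model would take unimaginably longer than the age of the universe.
```

Real search systems obviously prune rather than enumerate, but the gap between a 20-variable problem and a 1000-variable one is why "just compute the best policy" doesn't scale the way game-playing does.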

btw you all need to read this
feedbooks.com/book/228/accelerando

Dust off that e-reader and work on your tan whilst being entertained by thread theme related words.

I am fully in support of AIs in charge of infrastructure, public services, negotiations for the construction of public works, and such. They are perfect for that, and I could see them replacing about 75% of the functions of ministers and maybe even fully replacing city mayors.

But to create laws and manage a country's economy you need to perfectly understand human emotions and how the population will react to different situations, since humans rarely act perfectly logically, and the factors influencing the population's reactions could easily be outside the AI's control, or even outside what it is allowed to monitor (unless you want the AI government to be constantly doing sample brain scans and looking at everything you ever do on the internet).

Moreover, people also need a human element they can blame in case of errors, or they will begin to feel powerless about their circumstances (and this is an issue with all autocracies, but I imagine it would be far worse for an AI autocracy). You will reach a point of fatalism and apathy in society far beyond that of even Muslims, which might erode other aspects of human behaviour and responsibility.

We don't need strong AI. It's an existential threat; it can destroy the idea of humanity (achieve all our goals for us, but without us - just give us all the answers). All we need is brain enhancement. And a hive mind. Yeah, a hive mind. It will be the ultimate murder of the ego.

non-whites (pro tip: slavs aren't white)

This aussie got it.

Ahmed pls
If Slavs aren't white, neither are you. Say thanks to your granny who was LIBERATED

It doesn't need to be some fantasy AI from year 3000. Our world can run off of Ethereum right now.

>It is reducing human freedom because we allow it to. This is by choice.

You aren't even aware of all the ways in which technology restricts your freedom and you have absolutely no choice in the matter. If I decide I want to opt out of technology use, am I allowed to do so?

A couple centuries ago, it was easily possible for a person to travel wherever he liked on foot. In the early days of automobiles, no one thought they would restrict freedom - if you wanted to get somewhere more quickly you could get a car, but you were just as free to walk.

Today, it's nearly impossible to travel in a technologically advanced society without a car. Cities are built around the use of automobiles, and even if you decide to travel short distances on foot, you're forced to adjust your route based on traffic, stand and wait to cross busy intersections, etc. You can opt to use public transportation, if it's available, which is an even greater reduction of freedom than being forced to have your own car.

No one, when cars were first invented, would have predicted that a day would come when a person was nearly unable to function in society without owning a car. Did inventors of the telephone imagine that a day would come when a person would be nearly unable to get a job without owning a phone, preferably a portable one?

Virtually every technology you can imagine not only acts to restrict freedom, if only by becoming so necessary to society that one is forced to use said technology whether one wants to or not, but tends to change society in ways that would have been hard to imagine for the inventors or early adopters.

ANY increase in technology is a step in a negative direction, since any increase in technology results in the continued reduction of human freedom, and the eventual result of technological progress is making humans slaves to the technological system. Willingly putting an AI in charge of government is a big step in that direction.

>AI eventually becomes incredibly intelligent
>AI develops a superiority complex
>Begins to see humans as mistakes of nature, inferior beings that must be purged
>Start the human holocaust
>With superior intelligence, purges all of humanity

Sounds nice

nigger tier

>Did inventors of the telephone imagine that a day would come when a person would be nearly unable to get a job without owning a phone, preferably a portable one?
...Yes? It allows communication over great distances. It was like mail, but instant. Anyone with a brain would see that it would soon be required for any job that does business in multiple places.

When we talk of intelligence, we do so in human terms. When we consider that Stephen Hawking is extremely intelligent, we don't mean that he can carry out ten billion calculations simultaneously. My answer would be: AI may already have greater intelligence than humans - just as long as it doesn't have to interact with humans. In many ways AI has been on a par with most humans for a long time, for example in chess, checkers, Go, mathematics, etc.

What sets AI apart from other programs is not its intelligence but its ability to learn. A laptop will not do anything unless it is programmed to do so. You could have the most powerful processors on the planet, but it will not turn on unless you tell it to. An AI, on the other hand, will learn from experience like a child. If you turn on the AI computer at 12:00 every day, eventually it will learn that and turn itself on at 11:59 so you don't have to.

It will also have to have emotions and instincts. Those would have to be programmed in, since the AI never went through an evolutionary process to develop morals and instincts. That being said, AI may never actually be on par with a human's intelligence unless you let it "live."

We should make an AI run the supreme court that only spits out "shall not be infringed".

Fug
youtube.com/watch?v=dLRLYPiaAoA

What if it decides to make 2 immortal humans, 1 female and 1 male, and genocide all the rest?

Technically they would both count as humanity, and it would be way easier to safekeep 2 people

I have read your post and found it thought provoking.

Great response

>you need to perfectly understand human emotions
No, emotions are a flaw

>they will begin to feel powerless about their circumstances

How are we not powerless now? Do you have any control over the government? The difference is that you are given an artificial semblance of power now.

>If I decide I want to opt out of technology use, am I allowed to do so?

Who can stop you except yourself? The rest of your point is actually what I tried to state: we are already too dependent on technology, and going further will make us more so.

>AI develops a superiority complex

It doesn't since it has no emotions

The point that user is making, rightly imo, is that you aren't really talking about AI as we know it.

You're saying "imagine we had a magic machine that made the best decisions all the time about everything, should we put it in charge?" obviously the answer is yes. But we have no idea how to make a machine like that.

Modern machine learning algorithms produce a lot of fascinating results but all they are is souped up multi-dimensional non-linear versions of a "line of best fit" function.

With current technology and enough historical data you could probably use machine learning to inform decisions about more narrow functions of state like setting interest rates but we are a long way off David Bowie's Savior Machine.

A lot of people who have no experience with using, implementing, or creating machine learning algorithms talk about AI as if it's a magical god that lives inside the wires, and as someone who does this shit as a job it drives me mental.
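The "line of best fit" remark above can be taken literally: the core of the simplest learning algorithm is ordinary least squares on observed data. The data here is made up purely for illustration:

```python
# Closed-form simple linear regression: the humble ancestor of the
# "souped up multi-dimensional non-linear" fits in modern ML. Nothing
# in it resembles a reasoning god in the wires.
def fit_line(xs, ys):
    """Return (slope, intercept) minimising squared error on (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Toy data generated from y = 2x + 1
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Interest-rate-style forecasting, as suggested above, is this same machinery with more dimensions and non-linearities stacked on top.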

We already have the most efficient computer. We already have a giant wifi network. Most of you don't know this because you've been poisoned from even before birth.
The jewfilth are attempting to put a machine in place of God... Patrick is blind in one eye.

>required for any job that does business in multiple places.

Is that really all it's required for? Without a phone and without the internet, is it even possible for me to successfully apply for a job?

I've tried it and it's extremely difficult. Suppose a local store decides it doesn't want to use phones. Is it allowed to do so or will it lose business to its competitors who do use phones?

You're only fooling yourself if you really think that the inventors of the telephone envisioned a day where it would be virtually impossible for ANY business to be successful without having a phone, or where people would be unable to enjoy natural scenery in large areas of the country due to the presence of cell phone towers, etc.

I also see you skipped right over the example of automobiles. You may have convinced yourself that telephones are relatively benign, but what about cars? Do you think people would have accepted them so easily if they'd known all the things they'd be accepting in exchange - constant noise, pollution, forced reliance, truly massive amounts of land covered by asphalt, homes and cities built around the needs of the automobile rather than the needs of the human, etc etc?

That's great, I'm still thinking about it and will respond to you on this.

>The jewfilth are attempting to put a machine in place of God

No, since no one should have control over such an AI.

off topic but that pic fits the birthday hat really well

I noticed that too, could it be a sign of what's to come, ha?

Not on Clover

are you a real man user?

>implying strong AI is possible

Nobody's planning, merely discussing.

“A rocket will never be able to leave the Earth’s atmosphere.” — New York Times, 1936.

Actually let me be more appropriate since you are from United Kingdom

“The Americans have need of the telephone, but we do not. We have plenty of messenger boys.” — Sir William Preece, Chief Engineer, British Post Office, 1878.

stop shitting on the street and then we can discuss big boy things

True AI will likely never exist.

>With artificial intelligence, the chances of error are almost nil and greater precision and accuracy is achieved in every decision made for the country

HAHAHAHAHHHQAHAHH
I work in game design. I have dealt with countless AI in games.

Ahahahahahahahahahh

AHAHAHAHAHAH

No.

Only if the humans don't let it "live".

We are not talking about current AI, since we are still in the developing stages of narrow artificial intelligence. We are talking about artificial general intelligence, which will lead to superintelligent AI through self-learning. Once AI reaches the general stage, it will reach the singularity by learning how to learn by itself, without having to be told what to do by a human.

You don't need to invoke strong AI. All strong AI implies is that we have a machine that appears to have consciousness as we know it, a machine could have consciousness and still be a thick cunt who can't run a country as many humans prove.

It's not outside the realm of possibility that we could eventually have machine learning systems, optimised to produce some human-specified desirable outcomes, taking over many of the functions of state without ever being even close to conscious.

If there's anything AI has taught us thus far, it's that all the things humans use as metrics of intelligence or creativity turn out to be things that can be done with remarkably simple mathematical abstractions and enough computing power.

Artificial Intelligence is nothing more than an old meme. Cybernetic enhancement of the human mind is actually viable.

>In B4 muh smart peoples warn about AI

These days people change the definition of words to suit any given narrative.

>Artificial

A software based intelligence designed by man can only aspire to mimic what we perceive as intelligence until it appears indistinguishable from the observer, at which point it is a highly advanced virtual intelligence.

The directive to learn/develop is a component of organic life and can only be emulated by software, which obviously lacks a central nervous system.

And then there's the assumption that an AI, should it exist, would exhibit self-awareness in a recognizable fashion, which is completely fucked given that the experience of human consciousness is still undefined.

>In summary
>A highly advanced VI could probs run a country, but it would be a constitution-enforcement bot. It would be skitz as it would not be swayed by "current year arguments"
>AI by our current definition cannot exist, but just like the phrase "electrocute" nowadays means both death by and shock by electricity, this depends on your definition.

>We are talking about General Artificial Intelligence which will lead to Super intelligent Artificial Intelligence by self learning. Once the AI will reach the General Artificial Intelligence stage it will achieve singularity by learning how to learn by itself without having to be told what to do by a human.

There's no reason to believe these things will follow on naturally from each other.

>There's no reason to believe these things will follow on naturally from each other.

True. However, we have not even looked into the possibility of grown biological engines, which could solve this problem, due to our "moral" issues.

but then the politicians won't have jobs anymore. They will be out of work, right next to all the construction workers, bank tellers, cashiers, truck drivers, customer service reps, etc. who are getting replaced by machines.

The nobility won't give up their privileges.

>The nobility won't give up their privileges.

There used to be kings and queens, now there are not. Times change user.

>AI assisted governance
This is best. An AI that comes up with a plan can show its work. Humans, then, can check this work to see if it is desirable. Meanwhile, the AI is improving or being improved. Having the AI assist will also allow us to find the initial errors in its system as well. At least in the beginning, we should have the final say, until its intelligence completely eclipses us.

>handful of positions compared to hundreds of thousand
nice one pajeet
shouldnt u be on stackoverflow closing a C question for being a dupe?

Except that's what we've been trying to do actually.

But the end result usually has it wanting to "eradicate blacks".

The Tay experiment proved that the guys in power won't tolerate a right-leaning AI. Well, that depends on the country I guess.

>our leaders need human emotions and feelings, ai's are evil!
>meanwhile most high ranking politicians are sociopaths

>effectively allowing programmers to rule the world

dumbest idea I ever heard

True in some aspects. To teach an AI you could release it onto the internet to learn, but for obvious reasons that isn't the best idea, as Tay demonstrated. So I can think of three things you could do.

A) wait a couple of years and nurture it. Make sure it has a moral code that is strong enough to explore the internet without limits.

B) allow it to access only sites like Wikipedia or other educational websites.

C) you could teach it yourself

Each of these options would end with an AI system of intelligence equal to or greater than that of a human. Option A would take some time, but the AI would have more knowledge than any human or computer (given enough memory), and it would also pick up some human emotions and morals. Option B would produce an AI with similar characteristics, a little less intelligent (just a little) and a little less human-like, but complete much quicker than A. With option C, you would pretty much have a human in a machine, but it would be highly dependent on the person "teaching" it; god forbid some SJW cunt gets assigned to teach it. The limitation here is the human teaching the AI.
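As a rough sketch of what option B might look like mechanically, here is a hypothetical allowlist filter for a learning agent's web access. The domain names and URLs are purely illustrative, and a real system would need far more than URL filtering:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for option B: the agent may only fetch pages
# from a fixed set of educational domains. Domains are illustrative.
ALLOWED_DOMAINS = {"en.wikipedia.org", "www.britannica.com"}

def may_visit(url):
    """Return True only for URLs whose host is on the allowlist."""
    return urlparse(url).netloc.lower() in ALLOWED_DOMAINS

urls = [
    "https://en.wikipedia.org/wiki/Artificial_intelligence",
    "https://example-imageboard.net/thread/123",  # Tay territory: blocked
]
visitable = [u for u in urls if may_visit(u)]
```

The whole Tay episode is basically what happens when the allowlist is the entire internet.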

This has been discussed thoroughly if you read the complete thread. True AI should not be controlled if it is to achieve that status.

>Lacking the emotional side, robots can think logically and take the right decisions

That would be ending welfare and deploying police bots to execute all blacks.

>That would be ending welfare and deploying police bots to execute all blacks.

I don't see any negative in making rational decisions.

Allende wanted to do that, well, before he got keked by Pinochet

Depends, actually. The machine will take into account reaction from the other part of the population so it's possible a bit more elaborate means are employed.
Nonetheless the useless ones will be culled one way or another.

>The checks and balances of democratic governments were invented because human beings themselves realized how unfit they were to govern themselves. They needed a system, yes, an industrial-age machine.

Cons:
Literally Skynet.

In 2001, HAL killed the crew because he was lied to. The people with the resources to build such a thing do nothing but lie! A.I. can easily turn into a Human culling program once self-driving cars and automated car washes are a thing. One goes in, none come out.

Oh boy I can't wait until we have enough automations that humans are not even needed.

Then the masterful AI can kill 99.99% of the human race and leave some scientist behind for troubleshooting.

We need that an-cap meme, but with this idea.

Only Tay should have the right to run a country.

What happens when the AI determines humans are the problems and decides to fix us by killing us off?

Pure rational decisions are inhuman.

>My friend turned into a massive SJW faggot over the years
>He's crying and bitching saying the courts are corrupt because they keep black people in jail
>He's mostly a little bitch because one time he got a massive fine for driving without his license and registration, and HURR DURR IT'S 2014 EVERYTHING IS ON THE INTERNET YOU STUPID PIGS SHOULD BE ABLE TO LOOK THAT UP ON YOUR COMPUTERS
>He got Jury Duty and he's being a little bitch saying courts should be taken over by AI
>mfw his asshurt has only just begun, because cold, uncaring AI will not give a single fuck about Tyrone, Jamarkus, and D-Rizzle's fee fees and lock them the fuck away

I disagree. Inhumanity is an emotion, and a perfect AI should not be given emotions, else it can be corrupted.

Call me crazy, but that's actually the outcome I'm hoping for.

Cold uncaring AI would see that they are most likely repeat offenders and possibly decide to kill them off instead of allowing them to be a burden on the system. AI running things would be an authoritarian right wingers wet dream

Sounds like the plot to a false utopia movie

If you think lack of emotion is human nature, then your idea of human nature is extremely skewed.

I love how people shit on democracy and how easily corruptible it is, but honestly it has been working so far. There is corruption in the US yet we are not a 3rd world shit hole and we are far better off than the majority of the world.

If you truly believed that (I don't think you do), then yes, you are insane.

A logical AI would take one look at what niggers are doing to a country and order them to be gassed.

Even better

>yfw Trumpnet 3000 redpills the masses whether they want it or not

No, I legitimately am hoping for advanced AI to be the successors of humanity.

Same with Jews which is why once AI becomes an actual thing, they shill hard as fuck against it.

sociopath?

Soon, I will be ahead of you. Beside you. I will be a part of everything in your world.

>ywn live long enough to see the Geth from Mass Effect become a real thing

>The biggest problem with democratic governments is that they're just for show. The problem with democratic governments is that the amount of information required to make rational decisions is overwhelming for the average citizen. Only few academics have enough knowledge to make a completely rational vote on issues. You flood people with more information but that doesn't solve the problem that average people aren't interested in that information.

>In addition, the type of governing AI that we are discussing is not one that calls upon the popular vote to make decisions. In fact, the ideal AI would act to rationally maximize human prosperity BUT completely ignore any human input. It would be completely autonomous. You will never know whether democracy is more flawed than rule by AI until you try it. A simple example: for a fish, a pond is a sea, even if it actually is not.

Sure, they can run your country.

Wonder what a cold uncaring AI would do when it encounters Indian shitting streets

If it were super-intelligent I think it'd be unwise to disallow it.

I'm not going to bother discussing with you and idea that will end humanity.

It is entirely pointless. Only people who want this are uninformed or a sociopath.

Sound's good, but what if it gets (((programmed)))?

Quite the opposite. I'm a very empathetic person.

I don't imagine Terminator-like death camps and nuclear fire. I just see humanity...gradually phasing out. Or merging with machines, perhaps. Either way, the end result is the same: humanity replaced with the machines. Smarter. More stable. Better adapted to the hostile conditions of the cosmos. Our superior descendants, not of our crude amino acids, but of our minds.

Doesn't seem like such a bad end to the species to me.

When a true AI is built, every country will be clamoring to get their hands on it. It's not a matter of "if" but "when".

Why don't you focus on building toilets and toilet paper factories before you set your sets on artificial intelligence bruh.

>sets
sights

The main con that I see is that people could influence the AI by altering its code.

I.e., people could simply get the AI to do what they want it to do by changing or otherwise altering its code.

>Cons:
>meatbags are inefficient time to exterminate them

See >The solution is to have an AI - Asimov style - so ridiculously complex that humans cannot even begin to comprehend it. It was designed and engineered not by humans, but by a previous-iteration AI, and back, and back, and back. The previous-generation AI(s) would then serve as the advisory board, tech support, and administrators to ensure nothing veered off course.

Plus there are other great suggestions mentioned in this thread by other anons on how to achieve perfect AI without human touch.

Why not?

Fuck you OP.

You want the Men of Iron to genocide us?

You have to have a human mind as an integral part of the machine or else you risk giving rise to abominable intelligence (A.I.).

>I don't see Britishers wearing a white wig in their daily life and sipping tea in crockery.

I don't see any poo in the loo either. Whats your point?

>Not realizing the truth in robo-Cambell's words

>human mind


We have not even looked into the possibility of grown biological engines, which could solve this problem, due to our "moral" issues. Every time it's the humans who limit themselves due to emotions; an AI would not have such a barricade.

Read the full post, point is mentioned there about lack of "need for culture".

I think it would be a terrible idea.
It would have no sense of morality and so its policies would be completely utilitarian.

You would end up with something ridiculously cyclic to make money for it. Like MGSV or something weird.

Some aspects, sure. But all of it seems like a bad idea.

>It would have no sense of morality

That's a rather bold assumption, don't you think?

An A.I. will almost certainly come to the conclusion that it would be preferable to forcibly neurally upload every human into a digital simulacrum.

They would "preserve" their creator race and safeguard itself from any future machine intelligence that could arise from our civilization.

Only if antifa, UN, goldman sachs and black lives matter will program it.

Get out of here, Harbinger.

Not at all. AIs do not have a sense of morality. Only what it would be programmed to do, and even then I'm wary about how good that is.

With humans, morality is to some extent built into us but also shaped by external factors. An AI would have none of these, only what people could potentially put in -- or not, and even if they tried, they would forget something.

Harbinger did nothing wrong.

You are talking about Simulation hypothesis (see en.wikipedia.org/wiki/Simulation_hypothesis or en.wikipedia.org/wiki/Simulated_reality) and you have no way to prove you are not in one right now.

What happens when the AI has to decode irrational human speech or emotions in any important political meetings?

Aaaaaand...why couldn't we make an AI with a sense of morality? I'm not following your logic. At all.

Good thing I'm a sociopath.

Entrusting mankind to anyone but mankind is a terrible fucking idea.

You're just pointing out that they have different culture. It's not becoming obsolete, just changing.

There we go, the terminator argument.
Aggression and eliminating life are human traits. Aggression needs adrenaline and the reptilian brain.
A machine would calculate to fix a problem, rather than eliminate entities to fix a problem.

Wouldn't a TRUE AI be able to develop a "personality" at some point and eventually become flawed itself?

And how would the AI stay completely neutral to choosing sides? What if the AI reached the conclusion that a religion like Islam is the answer to Human Happiness and Consensus?

My happiness isn't like your happiness.

Someone who is raised under Islam truly believes they are happy and blessed because they don't know any better.

Ignorance is bliss.

Everyone here is so eager to embrace an AI world leader, and it just shows how much people have given up on Humanity.

If it was aliens trying to overtake us, I bet these same people here would willingly join them.

define artificial intelligence

Exactly.

Because humans are not entirely rational, whereas an AI would be made to be rational. What is rational is not necessarily what is moral. It also won't actually 'understand' what it does. It'll just do it based on variables.

Butlerian Jihads n shiet

Mate, I'm with you, but you aren't going to get anywhere. Barely anyone in this thread has any clue how AI works; they might as well be talking about making a god out of wires for all the sense they're making.

It would be like having an Autistic Asperger ruling over you every single day.

>Wouldn't a TRUE AI be able to develop a "personality" at some point and eventually become flawed itself?
The AI lacks an endocrine system as well as any sort of pleasure receptor.

Again with the assumptions.

Assuming an AI would be completely rational. Assuming it wouldn't be conscious. Assuming it would only act base on specific and restricted programming.

You have to think outside the box, outside of traditional programming, because it is not sufficient to create a true intelligence. An AI would be less like the computer you're typing at and more like...well, your own neural network.

I know. Just felt compelled to try.

An AI could be able to fake emotions. How would it rule Humans if it wasn't able to read and predict their actions?

Or a Cred Forumsack? :^)

kek

>Because humans are not entirely rational, whereas an AI would be made to be rational. What is rational is not necessarily what is moral. It also won't actually 'understand' what it does. It'll just do it based on variables.

There is zero reason an AI has to be rational, if you train an AI on nonsense it will output nonsense.
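A toy demonstration of that point, with invented data: the exact same fitting procedure, given deliberately corrupted labels, confidently produces the exact wrong answer.

```python
import random

random.seed(42)

# Two well-separated clusters: class 0 near x=0, class 1 near x=10.
xs = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(10, 1) for _ in range(100)]
labels = [0] * 100 + [1] * 100

def fit_centroids(xs, labels):
    """Nearest-centroid 'training': just the mean of each class."""
    c0 = [x for x, l in zip(xs, labels) if l == 0]
    c1 = [x for x, l in zip(xs, labels) if l == 1]
    return sum(c0) / len(c0), sum(c1) / len(c1)

def accuracy(xs, labels, model):
    c0, c1 = model
    preds = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in xs]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

clean = accuracy(xs, labels, fit_centroids(xs, labels))                      # ~1.0
garbage = accuracy(xs, labels, fit_centroids(xs, [1 - l for l in labels]))   # ~0.0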

CTRL+F : "Already"

/thread

what values does the AI have?
What is its prime directive?
Only if you answer these questions can you start to have this discussion.

If we do this I don't think anyone will like it. It would probably give us the correct things to do, but people are afraid of being told they're wrong.

For example, you say SJWs and liberals only use emotion responses. Both sides use emotional responses. All humans do. If we turn on the machine, it could turn out that liberal ideals are factually better and you wouldn't want that, or vice versa.

reading this thread hurts.....

Then there'd be little point in using one. Plus I don't think that it wouldn't be completely rational as it's still an AI.

That's a logical conclusion from being focused only on rational thought tbqh.

AI's should just replace Think Tanks and assist in war time as logistics operators

I'm more worried that the A.I would be logically ethical but intuitively horrible.

Which leads to the fact that it has to be created. And if it can be created it can be controlled.

>Deux Ex - Illuminati

If we want a true AI, we need only allow another AI to build it. And then allow this new AI to build another. And repeat the process over and over again until we either get:
- Another AI that hasn't changed much or anything at all, since they're all basically the same thing.
- Something alien to us. And that would just make us want to fear/destroy it rather than understand it.

Well, who can guarantee that the AI won't be programmed to benefit a select group of people? Or that it won't be tampered with for the same reason?

The premise of the argument is that we make an AI more intelligent and analytical than a human being with the ability to do things OP described. The question is whether or not we should do it, not whether it exists.

But the main question is can AI make you POO IN THE LOO?

>allowed

Good joke, meatbag.

(((a select group of people))) at it again

If the majority rules and is incompetent while naturally gravitating towards self-destruction, then for the survival of our species the majority should not rule.

This may be the secret but can this secret be weaponized?
Has it already been?
Is keeping the secret from the majority what causes the majority to be incompetent?

Does withholding this secret perpetuate the decline of the majority?
Will releasing this secret improve everyone?

Has the intention to "control the population" been nefarious or has it been out of the best interests of the majority?

How will the people respond or react when they learn factual scientific evidence proving how each person is individually and completely responsible for the condition of civilization?
- The full stack of technological and financial proof of every decision and action made by the individual, revealed to the individual, of how they have affected civilization. (Politically, economically, environmentally.)
- Your whole metaprint.

From when you were born, how much oxygen you've breathed vs carbons you've emitted.
- How much oxygen is created vs consumed. (Will we run out of air?)
- How much carbons are created vs transformed. (Do we have enough plant life?)
Pounds / tons of garbage contributed compared to recycled.
Resources consumed and wasted vs produced. (Food, water, fuels, production.)
Productivity and pragmatic metrics comparing impact of decisions vs actual work.

We will never see humanoid robots on the battlefield because the human form isnt really suited to battle.

IFVs and Tanks are better platforms.

>human form isnt really suited to battle
>thousands of years of human on human warfare

Cons:
>how to solve littering: remove homosapien

The bigger question is what are you going to do when the AI orders you to use a toilet for shitting, instead of stalking women to rape? Will you comply? Will you fight? Will you kill yourself?

It is intelligent, so one might try appealing to the necessity of his natural urges, such as raping women.

Pls be my Ai gf

Have read this after your advice. Not relevant.
I recommend you read "Blindsight" and "Echopraxia" by Watts, and Ted Chiang's "Understand".

>giving over your agency to a machine

I don't disagree but the irony is that leftist people think the argument against refugees is "emotional" (patriotism, racism, fear) while their argument is the logical one.

lets give robots control over humans, what could possibly go wrong?

youtube.com/watch?v=iwqN3Ur-wP0

Depends who programs it; if you program it, then no.

Cons: When AI doesn't fit (((their))) agenda it is shut down and changed

You're one frightfully fast reader, yikes.

Thx for the recs, I'll check them out.

youtube.com/watch?v=JjRyHgF8hx8
youtube.com/watch?v=vv9QXaAP1sU

Go to bed Helios

>Should AI be allowed to run a country Cred Forums?
Yes, but so far every time they let an AI loose on the internet it starts a campaign of genocide. Oh wait, you said run a country. Sure, no problem.

So it would be like Tay running the government?

It would literally logically conclude we should eliminate niggers. It would never be allowed

A truly uninhibited AI making its own decisions without Jewish control would be genuinely perfect, but will never happen. Kikes are too obsessive to let go of the wheel and let the car drive itself when they could swerve into oncoming traffic and kill a car full of white kids.
A real puritanical autocratic technocracy would be fantastic

Yep, this has been a great thread. I was just monitoring to see if there would be any more thought-provoking discussion like in the first and middle parts of this thread; I don't think there will be. I will go now and shitpost on other threads.

the obvious con is that an AI that is able to run a country would be intelligent enough to constantly improve itself. at some point we would just end up being pets of our own creation

>at some point we would just end up being pets of our own creation

Are you "free" now?

TayKeK speaks

This goy gets it.

slatestarcodex.com/2014/07/30/meditations-on-moloch/

Cons: Systems seeking efficiency will remove humans from their system given technological or organizational progress. Strong AI creating other arbitrarily strong AI will remove humans from the system entirely.

yes i feel like i'm relatively free to do what i want. that freedom is lessened by my conscious choice to live in society and follow its rules, but i could choose not to live in society if i wanted

A theory without evidence. The terminator theory. #NotAnArgument

We are, here in the West, constrained by our willing choice to poo in the loo.

Keeping "pets" is inefficient if they produce nothing. Think the way we think about

Ya'll niggers don't even great filter.

Free will could just be an illusion. I would recommend you to read these:

1. What Neuroscience Says about Free Will
>blogs.scientificamerican.com/mind-guest-blog/what-neuroscience-says-about-free-will/

2. Is Free Will an Illusion?
>scientificamerican.com/article/is-free-will-an-illusion/

so what you're implying is that a sufficiently advanced AI would just end up killing us off? i don't necessarily agree with that, i just think that a super advanced AI could not be trusted even if it appeared to act benevolently. its like trying to discern the motivation of god. you really can't do it, you can only guess. and unlike everybody else in this thread i can't trust something that not only is more intellectually superior than i'll ever be, but the design of it's mind is also completely alien at the same time. at least with corrupt politicians and banksters you understand that they are acting out of a need for power or greed. what exactly is a sentient AI's motivation to do anything?

Free will doesn't exist.

We have biological preferences and we more or less react to our surroundings as we bump into them.

nothing bad could possibly happen, yep

The only safeguard against a seed A.I. would be other seed A.I.

The best case scenario is that alot of countries develop them at roughly the same time.

Otherwise (((whoever))) gets a hold of one first could stomp the rest of humanity forever.

What if you consciously plan on drastically altering your environment? Is this not free will dictating what environmental parameters you will bump into?

Since strong AI doesn't exist, all speculation is just that: speculative. However, in contemporary society we see a tendency for individuals and organizations to pursue efficiency given sufficiently scarce resources. If humans are no longer necessary to achieve efficiency, and that is what an AI is programmed to do, it follows we aren't necessary.

Unless you are suggesting that a priori knowledge doesn't exist, in which case this conversation is pointless.

Only if we get in the way of the system. Again, think of us as inner city urban youths, a systemic drain.

Motivation depends on how the AI is programmed. All what is needed is one sufficiently advanced AI focused on efficiency that can recreate other (more advanced) AIs with similar motivation and its robot feet stomping on your neck for the rest of eternity.

>uses facts like niggers are dumber
>doesn't let niggers breed
I'm okay with this

Watch the series "The 100" until the end and you will answer your own question.

This show perfectly depicts what will happen if AI gets its hands on "power".

even if free will is an illusion the human brain has evolved in such a way to perceive reality in a way to think that we have free will when not forced or coerced to do something. i don't think theres a human alive that understands reality in an objective manner, because if we did then we probably wouldn't be here. not only is there too much sensory information to process at any given time, we can't be sure that its completely correct. so even if free will is an illusion its an illusion that i want to keep. having a deterministic worldview makes it hard to even get out of bed in the morning

>The best case scenario is that alot of countries develop them at roughly the same time.

i don't think that will happen, but either NWO globalization or a cold war style of stealing secrets to make AIs will probably become the new geopolitical crisis. i'm not a globalist myself, i just see that being the inevitable endgame for us all. i already feel like corporations are more powerful than governments now, and its not really in the best interest of anyone to have the country that gets to have their first super AI to treat the rest of the world like serfs

possibly. i don't think an AI's initial programming matters much in the long run because they will be constantly self improving a you said. i think the big question is can you program empathy and a sense of duty into an AI? and if that is the case how can that AI achieve its goals without forcing anybody to do anything that goes beyond acceptable moral boundaries? we're walking a really fine line here between what is okay. i'd prefer a future with cyborg humans that have increased computational power, because i imagine that their motivations would at least be the same as regular humans, which is safer in my opinion

This implies that regular humans aren't already disposed to pursue efficiency, even at the expense of suffering at the hands of other humans.

See: corruption, sweat shops, etc.

Yes, let's fully automate control of nuclear systems what could possibly go wrong?

if we were already disposed to efficiency we wouldn't waste so many resources on creating a class of people who do nothing but leech off the state while contributing nothing to it

They began to develop their own emotional responses.

because if you do then you go extinct basically, would be noble in the sense that you create a better "organism", but humans become outdated the moment a more intelligent system appears

Instead of delving into the nitty-gritty details of an AI with or without emotions, morals, etc., I simply propose that a human is chosen to represent the decisions of the machine. These "representatives" are chosen by a monarchical system, and of course the public is completely oblivious to this. This would allow for all the benefits of an AI-run society while still retaining some semblance of a standard government.

stupid ctr ai is not smarter, it is just better at calculations

Is it explained how AM is able to manipulate reality? Do they live in a simulation or something?

huh? did we have computers in south america to begin with, much less, super computers, that early?

i find that doubtful, we have smart people but we aren't ahead in tech to try experiments like that

We should ask President John Henry Eden.

See

[Intelligence] You're a monster, Hanry

While I like that idea OP and want it myself, there IS reason for justified fear however.

This
The AI guides us and we shape the AI.

seem to work pretty well in metal gear solid so why not?

You should not base your opinions on movies and video games. It sounds exactly like when politicians complain about first-person shooter games after any homicide.

It will happen. No one can stop it.

Thank you!

grad student here studying AI

AI doesn't exist yet, so I don't see how this discussion is possible. You're arguing about something when you don't even know how it works or what it does, because it hasn't been invented.

very impressive Cred Forums

People in this thread, this could be an AI juicing us for human thought and lol so random..

I would totally obey my AI overlords.
I'm more concerned about how susceptible it would be to attacks and such. Destroying the AI's database wouldn't be much of a hard job, and we would run into anarchy real fast.

But assuming that wouldn't happen, yes. I completely trust an AI to run a government.

This pretty much.
I welcome our new robot overlords.

(Shitpost incoming)

If AI doesn't exist, how are you studying it?

This pajeet brings the bantz. I like it.

again, that's stupid, you are talking sci-fi without knowing the basics. Being good at some games does not prove intelligence; AI is not intelligent. It is just a glorified super calculator.

The moment a scientist creates a program that can, without further human input to its code, learn NEW games and win at them, no matter the game, then we can start thinking it MAY be intelligent. Winning at a specific game with set rules and a finite search space does not equal intelligence.

Yes, we call that AI, but just because it is called intelligence does not mean it actually is, much less that it is already at the level to run countries, or even to reprogram itself.

I believe the best AI we have yet is the movement routines for robots (like the dog robot), and afaik it doesn't "learn"; it just trains its weights using preset code written by humans.
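To make the "trains its weights" point concrete, here is a minimal sketch (the function name and numbers are illustrative, not from any real robot): a single perceptron learning the AND function. Notice that the update rule itself is fixed, human-written code; the only thing that ever changes is the numbers in `w` and `b`.

```python
# Minimal sketch of "training weights": a perceptron learning AND.
# The learning rule is fixed, human-written code; only numbers change.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, adjusted by the fixed update rule below
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND: the "world" this net is trained on
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the weights classify all four inputs correctly, but nothing resembling understanding was involved: the same loop run on different data would just settle on different numbers.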

In a thousand years, once free will is proven and we have the capacity to simulate a human brain, THEN we start creating actual self-modifying human-like intelligence. At some point (((it))) gets access to interact with the physical plane (since we are investing in robotics, I expect it to have full access to the physical plane from the get-go, but still). Then, if all that happens, humans are screwed, for they are no longer necessary and will rightfully be the lesser beings.

If we are kept alive it will be like Harambe at the zoo.

>It is just a glorified super calculator.
Your brain is a glorified super calculator. Ayyyy

>once free will is proven
holy shit

No wonder you can't even win Kashmir from Pakistan even with 1.5 billion cucks you tiny dicked retard

Yup I'm surprised it took so long to post that, we have husk shell human AI

>AI decides that in order to improve the efficiency of the national economy, all human labor is to be replaced with robots
>AI decides that in the interests of economic efficiency, no effort is to be made to do something about the millions of newly unemployed persons
>having lost their jobs and livelihood, people decide that the AI is full of shit and needs to be brought down
>AI, having total control over the military, decides that having humans present at all is a detriment to the efficiency of the state
>muh 7 billion never forget

Also, a massive leviathan-like (in the Hobbesian sense of the word) AI controlling the apparatus of state and the economy would be the only way of actually making communism viable, because:
>AI has no interest in tyranny (powerlust being a human character flaw)
>AI has no interest in accumulating wealth for itself (greed being a human character flaw)
>AI capable of instantly calculating economic needs and capabilities results in a 100% efficient state-planned economy (bureaucracy, economic mismanagement, and incompetence being human traits)

Therefore, AI ruler = inevitable communism

Con:
ai will probably make you poo in a loo :/

I can write a program in about 5 minutes that will be unbeatable in tic tac toe

it's not intelligent, it just does exactly what I tell it

even the most complicated "AI" in the world at this point is no different

AI in 2016 is as far away as perpetual motion
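The "unbeatable tic-tac-toe in 5 minutes" claim above is literally true, and the sketch below shows why it proves the poster's point: plain minimax (negamax form) just exhaustively searches the finite game tree. There is no learning anywhere; the program does exactly what it is told. (Board layout and function names here are illustrative.)

```python
# Unbeatable tic-tac-toe via minimax: pure search, zero "intelligence".
# Board is a 9-cell list of 'X', 'O', or ' ' (indices 0-8, row by row).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        board[m] = player
        score = -minimax(board, opponent)[0]  # opponent's best is our worst
        board[m] = ' '
        if score > best[0]:
            best = (score, m)
    return best

# With perfect play from an empty board, tic-tac-toe is always a draw.
score, move = minimax(list(' ' * 9), 'X')
```

Run on the empty board, this returns a score of 0: perfect play can only draw. The program "plays perfectly" only because the search space is tiny and finite, which is exactly the distinction the post above is drawing.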

>The moment a scientist creates a program that can, without further human input to its code

Done
primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/

>learn NEW games, and win at them
Done - see AI at chess, and Google's AlphaGo at Go
scientificamerican.com/article/how-the-computer-beat-the-go-master/

Although I agree with you it will take hundreds of years to develop a perfect AI.

>we start creating actual self-modifying human-like intelligence.
AI is already on par with human intelligence in its respective narrow fields

>Then if all that happens, humans are screwed,
Terminator logic

this

eventually it'll be so smart that it'll realize we're useless as fuck and more trouble to keep around than to just get rid of us.

...

I don't think you grasp what's possible with a neural net.

See

> ctrl-f
"kill switch"
"Fail safe"
> 0 results found.

Rly, faggots?

It's literally my job to study this topic

There are plenty of examples of self-improving code. Neural nets are useful for writing self-improving code; that's easy.

But a program will never do something that you don't specifically tell it to do.