AMD Vega 10 Details Leak Out

videocardz.com/63700/exclusive-first-details-about-amd-vega10-and-vega20

>Vega 10 will be released in first quarter of 2017, it has 64 Compute Units and 24TF 16-bit computing power. Vega 10 is based on 14nm GFX9 architecture. It comes with 16GB of HBM2 memory with a bandwidth of 512 GB/s. The TBP is currently expected at around 225W.

Other urls found in this thread:

anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3
techreport.com/review/26239/a-closer-look-at-directx-12/3
en.wikipedia.org/wiki/AMD_Radeon_Rx_300_series
en.wikipedia.org/wiki/Graphics_Core_Next
techpowerup.com/226012/amd-vega-10-vega-20-and-vega-11-gpus-detailed
youtube.com/watch?v=o2b6Nncu6zY

AMD can fuck up as bad as they want but we need more Amada desu senpai

>225W
Meanwhile the 1080 is in laptops.
AMD is finished.

>buying gaming laptops

This can't be right, that's literally more than twice as fast as a Titan XP for less TDP.

>HBM2 memory with a bandwidth of 512 GB/s
Doesn't sound right, that's the same as HBM1.

i'm calling it now
it trades blows with the GTX 1080 but wins more at 4K while using about 20% more power. Nvidia releases the GTX 1080 Ti a month or two before its launch.

calling it right now
AMD is shit

>Doesn't realize desktop performance isn't exclusive to desktops anymore.

It's not 2010 senpai.

>Doesn't realize 50W doesn't matter on a good desktop PSU

>Doesn't realize gaming laptops are still housefires even with lower power usage

That 24TFLOP figure is for half precision, makes no sense to list.
4 high-density HBM stacks running at 500MHz could still be somewhat viable bandwidth-wise, but there's no reason for it as they're already absurdly low power.
For 64CU to hit 12TFLOP single precision it'd need to be clocked just a little under ~1500MHz, something most definitely not happening unless the arch is radically fucking different

It's all bullshit.
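For what it's worth, the clock math above checks out. A rough sketch in Python (assuming GCN's usual 64 shaders per CU and 2 FLOPs per shader per clock via FMA, with fp16 at the 2:1 rate the leak implies):

def gcn_tflops(cus, clock_mhz, shaders_per_cu=64, flops_per_clock=2):
    # peak TFLOPS = shaders * FLOPs per clock * clock speed
    return cus * shaders_per_cu * flops_per_clock * clock_mhz * 1e6 / 1e12

def clock_for_tflops(cus, tflops, shaders_per_cu=64, flops_per_clock=2):
    # invert the above: MHz needed to hit a given fp32 TFLOPS target
    return tflops * 1e12 / (cus * shaders_per_cu * flops_per_clock) / 1e6

print(clock_for_tflops(64, 12))   # ~1465 MHz for 12 TFLOPS fp32
print(2 * gcn_tflops(64, 1465))   # ~24 TFLOPS fp16 at a 2:1 rate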

Only 225w?
Isn't 1080 200w?

Don't reply to tripfags

...

Seriously, Nvidia seems so focused on power consumption, to the point of sacrificing hardware for dx12 (the hardware scheduler for async compute). Do you think it's because they're targeting the laptop market first?

72 CU at about 12 TFLOP/s

>live in 2011

I think their end goal is mobile.

my hope is they plan to at least aim for the Titan XP so the 1080ti doesnt kill AMD too badly

GG I'm getting a meme like the fury x or maybe even a 1070.
>waiting until 2017 to release that shit
AMD everybody!

They made the 300W GP100 but they have no reason to sell a variant at lower margins to consumers when AMD can't even compete with the GP104.

They've tried mobile and didn't do too well. iirc sales of gaming laptops are actually increasing, so I think Nvidia is trying really hard to corner that market.

>didnt do too well
Google is using tegras for the pixel line and so is Nintendo

they would release it now if they could.

they would've released it 5 years ago if they could have.

moore's law dictates that as time goes on, each new transistor is faster. by waiting until 2017 they'll be getting MUCH faster transistors. it will destroy what nvidia has to offer. you'll see i was right in 4 months.

Zen mobile APUs will do that. 1080p gaming will be a possibility. Their FX-8800 already does 720p, and OEMs cut its dick off with cheap, slow, single-channel RAM designs.

>there are people who actually believe this

OEMs wouldn't use AMD in high-end laptops anyway; to normies it's the poor brand

they were targeting the dx11 crowd.

got to remember, nvidia downplayed compute and downplayed APIs like dx12.

anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3
>What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.
nvidia never thought compute would be useful in games.

techreport.com/review/26239/a-closer-look-at-directx-12/3
>Nvidia seems to see lower-level graphics APIs as less of a panacea than AMD does. Tamasi told us that, while such APIs are "great," they're "not the only answer" because they're "not necessarily great for everyone." This statement goes back to what we said earlier about developers having manual control over things currently handled by the API and driver, such as GPU memory management. Engine programming gurus like DICE's Johan Andersson and Epic's Tim Sweeney might be perfectly happy to manage resources manually, but according to Tamasi, "a lot of folks wouldn't."

>Nvidia also believes there's still some untapped potential for efficiency improvements and overhead reduction in D3D11. Since Mantle's debut six months ago, Nvidia has "redoubled" its efforts to curb CPU overhead, improve multi-core scaling, and use shader caching to address stuttering problems. (Tamasi freely admitted that Mantle's release spurred the initiative. "AMD and Mantle should get credit for revitalizing . . . and getting people fired up," he said.)

nvidia focused solely on dx11 and really never thought of the future.

>all the waitfags

I hope it has built in water cooling like the Fury x
No matter what I'm still probably going to buy one or two vega cards

>AIO like Fury x

Please no

as glorified an amd fanboy as i am, the aio-only option for the fury x is what turned me away.

i don't care if they want to release the reference line as aio only, but at least give aibs the option, and support, to produce regular air-cooled cards.

NO

fury x has way lower temps than any gpu despite its housefire tdp
aio was AMD's best idea

They should create a 14nm GPU that doesn't need a fucking AIO.

>implying Moore's law hasn't slowed down because they can't shrink down transistors

You're right, until they see their tech inclined friends and family asking for a decent GPU paired to a decent CPU sipping 15-35w.

>gtx 1080 = 8.9 tflops
>vega 10 = 24 tflops

>gtx 1080 = 180w tdp
>vega 10 = 225w tdp

Performance per watt HEAVILY favors Vega 10 (quick math below).

Hopefully it'll be the first true 4K gaming card; just because the 1080 can play last year's games doesn't mean shit.
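On the perf-per-watt claim: remember the 24TF figure is fp16, as pointed out further down the thread. A quick sketch using the leak's numbers, with ~12 TFLOPS fp32 as the assumed apples-to-apples figure:

cards = {
    "GTX 1080 (fp32)": (8900, 180),               # GFLOPS, TDP in watts
    "Vega 10 (fp16, as leaked)": (24000, 225),
    "Vega 10 (fp32, assuming 2:1)": (12000, 225),
}
for name, (gflops, watts) in cards.items():
    print(f"{name}: {gflops / watts:.0f} GFLOPS/W")
# -> ~49, ~107, ~53: the "HEAVILY favors" claim only holds at fp16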

Would go nice with my ultrawide freesync monitor

Nvidia will release a GP102-based 1080 Ti, like the Pascal Titan.

>6+ months without competing at all with 1070 and 1080
How could this ever possibly not be a terrible decision?

> wanting a 225w tdp GPU on air

Might as well just put a fucking hair dryer in your case friend, the temps would be lower and the noise wouldn't be as bad.

because 10% better than the 980 Ti isn't much to care about when they already released a 980 Ti competitor

Is the 1080ti going to be at 89% to follow the trend?

The R9 Fury Nitro is an air-cooled Fury X. So it's likely there'll be options for both air-cooled and AIO Vegas.

>Fury X at 58%
knowing AMD, they'll just tell us to stick two Fury Xs together, like they said 2 RX480s were better than a 1080

Probably. Not like the Titan Maxwell scam; the 980ti was far better.

Servers/Workstations first, gaming second.

Power usage matters in those markets, and having smaller chips means you can profit even more; just look at the GTX 1060's GP106, which is smaller than AMD's RX 480 die.
If both sold at the same price 1:1, Nvidia would still get higher profits because they can get more chips per wafer.
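Rough math on the chips-per-wafer point (a sketch using the standard gross-die approximation; the ~200 mm^2 GP106 and ~232 mm^2 Polaris 10 die sizes are the commonly reported figures, and yield is ignored):

import math

def gross_dies(die_mm2, wafer_diameter_mm=300):
    # classic gross-dies-per-wafer estimate: wafer area / die area,
    # minus an edge-loss term for partial dies around the rim
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_mm2))

print(gross_dies(200))   # GP106 (~200 mm^2): ~306 gross dies
print(gross_dies(232))   # Polaris 10 (~232 mm^2): ~260 gross dies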


With the incoming age of AI and machine learning, Nvidia is going to get even richer selling small, power-efficient chips that can be placed anywhere.

I predict two types of products in the future:

Low-cost budget ASIC devices, dependent on network updates to learn.

Premium devices that learn in the field. Those won't need to be connected to a network since they can learn by themselves, though connectivity will be available to speed up learning, share experiences, and improve the network in general. Those devices will run Nvidia hardware.

kek

Sapphire's R9 Fury works really well. Doesn't get housefire hot, is silent at lower temperatures and loads, and is somewhat quiet (fans running at 800 RPM) under heavy load.

Not all manufacturers have figured out how to cool graphics cards though. My R9 290 from Gigabyte gets loud as hell, fans running at 100%, and still manages to hit 95C, where the card starts to throttle.

VEGA is going to be faster than anything Nvidia has.

Polaris/Vega are both GCN1.4, so this *should* be apples to apples.

>Polaris 10:
36 compute units
2304 stream processors
5.5 TFLOPS
256 GB/s memory bandwidth
144 texture units @ 1120MHz (base clock) = 161.3 GT/s
35.8 GP/s @ 1120MHz (base clock)

>Vega 10 @ 1500MHz
64 compute units
4096 stream processors
12.3 TFLOPS [1]
1 TB/s [2] memory bandwidth
288 texture units @ 1500MHz (base clock) = 432 GT/s [3]
95.9 GP/s @ 1500MHz (base clock) [4]

I can see why they're quoting ~250 watts of power. This looks like an absolute beast on paper. 16GB of HBM2 with >10 TFLOPS of compute seems idiotic. Double the compute units is great for DX12. Of course, this is all assuming a 1500MHz base clock; do we really know if 14nm FinFETs can handle that speed?

[1] 4096 SPs x 2 FLOPs per clock x 1500MHz works out to ~12.3 TFLOPS, roughly a 2.2x bump over Polaris 10.
[2] 512GB/s is the same as HBM1. HBM2 is 256GB/s per stack with a 1024-bit wide bus per stack. Vega 10 reportedly has a 4096-bit bus, and HBM2 is hinted, so this should be 1024GB/s, not 512GB/s.
[3] This is pure speculation. Double the TMUs? Who knows. 1500MHz base clock? Some leaks say so. This is my best guess.
[4] Some leaks point to a 1500MHz base clock. If they keep the 1120MHz clock (e.g. 14nm clocks = lol) you're still looking at a ~72 GP/s pixel fill rate.
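For what it's worth, the fill-rate arithmetic in [3]/[4] holds up. The same math as a sketch (the Vega TMU/ROP counts here are the speculation from the post above, not confirmed specs):

def gtexels_per_s(tmus, clock_mhz):
    # texture rate: one texel per TMU per clock
    return tmus * clock_mhz / 1000.0

def gpixels_per_s(rops, clock_mhz):
    # pixel fill rate: one pixel per ROP per clock
    return rops * clock_mhz / 1000.0

print(gtexels_per_s(144, 1120), gpixels_per_s(32, 1120))   # Polaris 10: 161.3 GT/s, 35.8 GP/s
print(gtexels_per_s(288, 1500), gpixels_per_s(64, 1500))   # speculative Vega 10: 432.0 GT/s, 96.0 GP/s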

>FURY X WITH HBMeme WILL BLOW THE TITAN !

t. amd

>24 Tflops

please be true.

will buy if true

Not like anyone who buys a Titan cares, they're all people that wipe their asses with money anyway.

That really depends on what nvidia does with volta.

>24TFLOPs
Judging from the RX480 (which is also GCN1.4) they'd have to pull off something like a ~2900MHz core clock (with only 64 CUs).

We can expect ~9TFLOPS at 1120MHz; if they push up to 1500MHz we'll likely see something in the ballpark of 12TFLOPS.

Yes and no; they care because the 980ti was really better than the Titan X

>only 225W
nVidishit BTFO

people don't give a shit about tdp

only ones who care about tdp are nvidia shills when they can use it to bash amd

nvidia shills sucked down 200 series, 400 series, 500 series, 680's and 690's, 780's, 780 ti's, 980 ti's like a $5 indonesian whore and didn't give a flying fuck at how much power they drew. as long as it had nvidia on it its all they cared about.

if they truly cared about power consumption, if it mattered SO fucking much as they claim, they would of went for less powered ones. like the 4850 / 4870 over the 250 / 260, which offered competitive performance, were slightly cheaper, had the same driver quality at the time (both were horseshit), and USED less power. but they didn't. they bought housefire 250's and 260's like crazy.

all the shit they spout about muh physx, power consumption, performance, and whatnot over the years never actually mattered to them. they just like nvidia for whatever convoluted reason, even when nvidia was truly dogshit. like the fx days, when nvidia still managed to dominate in sales. granted it was close, but they still came out ahead even though the fx line was absolute garbage.

Usual AMD product release: everyone will hype it as the second coming of GPU Jesus, but in reality it will just be a 1080/1070 but slightly cheaper (with much higher power usage and muh 16GB memeram).


People need to realize AMD's goal is to be 2nd place: sell you what their competitor has, but cheaper and on 10% of the R&D budget. The more you overhype their shit, the more you set yourself up for disappointment.

Why can't you people do basic math?

Not necessarily. The 480 has 4 fewer compute units than the 390 (36 vs 40), and half as many ROPs, and it still performs just as well, simply due to the improved architecture. If that architecture improvement is also present in Vega, which it undoubtedly is, then Vega will be a massive performer.

...

People love Nvidia because it is just better.
Most people switched from shitty nodrivers housefire AMD to Nvidia and never looked back. Every time a new AMD product comes up it brings back painful memories of what it was like to have a AMD/Ati card.

Even today, AMD products kill your fucking motherboard. lol.

Mature 14nm could easily hit 200MHz higher than current chips.

enjoy your empty wallet and housefire on the go

cmon guys, its 2016 :^)
might as well remove the headphone jack from your pc case while you're at it, gotta stay trendy

im never buying amd again

tfw phenom is their best cpu

amd has stated that vega will provide a performance-per-watt increase over polaris on the same process due to additional tweaks to the architecture. it's still gcn 1.4 of course. no new features or changes, just further optimized to squeeze out extra performance.

>they still came out ahead even though the fx line was absolute garbage
Not me I had a Radeon 9600 XT. Nowadays though? Pretty happy I got a 1060 which was cheaper than any 480 in here.

>enjoy your empty
Maybe
>and housefire
Not with Nvidia

You're forgetting the GTX 480 :^) That was the good old times

>Even today, AMD products kill your fucking motherboard. lol.

and don't bother responding back to that. i've already googled it. i googled it back when shills first started talking about it.

it was a load of horseshit, and the few who claimed it were speaking from their ass. like the one on reddit who posted photos that showed fucking spillage all over his motherboard.

now go back sucking on jens 3.5in huang

AMD just isn't a trustworthy company. Cheap is all they're about. I'd rather pay a bit extra and get something secure from Nvidia

It's not secure though, remember the 0.5GB meme. Not that AMD are any better given how they also lied about the 480. They just have more autistic fanboys, since they're the underdogs and therefore must be the "good guys".

You do know it'll probably come with 64 ROPs, which will boost performance quite a bit without having to dial up clock speeds. The reason the RX 480 was clocked so high is that it intentionally used 32 ROPs (a massive performance cutter for AMD) for the sake of making the chip as cheap as possible to produce.

your jewtel cpu will be hot as shit, your Nvidishit will be running in the high 80s and the system will be a burnt out piece of toast in less than 2 years, good goy enjoy your minecraft on the go

remember that even Bulldozer was hyped to be an Intel killer.

>25% more power draw
>twice the performance

How do they do it?

my amd 7850 2gb gpu runs over 80C in normal usage and in gaymes goes up to 90

Poolaris was basically the first step, adding next gen shit like HBM was next

There was an instruction set revision between R9 390 and RX480. They went from GCN1.3 to GCN1.4. Do you even know what you are talking about? Vega is GCN1.4, with the same instruction set. AMD has already confirmed this with the OpenGL patches.

>will

The AMD prediction...

>24TFLOPS
Sounds like a dual-GPU card, if anything.

RX480 does about 5.5-6TFlops

Assuming AMD had a "bigger" Polaris, for example a 12TFLOPS part. The Fury X had about 8.6 TFLOPS.

This makes most sense to me.

If they're doing 2 Vegas, it would probably be a 12TFLOPS smaller Vega to compete with the 1080/Ti, and the big dual-GPU one to compete with the Titan.

Don't know how to feel about this.

390 is second gen or v1.1, the 285/380 and the Fury were third gen or v1.2, and the Polaris chips are fourth gen or v1.3. Vega is supposed to be more than just a bigger Polaris and will be fifth-gen or v1.4

read harder

i can't wait to finally upgrade my 770

>80 normal usage and on gaymes goes up to 90

do you live in the Sahara in a dust cave or something? good 4 u

Theoretically, what's the best video card you could make right now? Video cards are quite large, but it feels like they're too weak. Are companies milking money, or is there a valid limitation like PCI Express? If a motherboard had all the components on it, similar to the Mac Pro idea, would it be faster?

>and the few that claimed where speaking from their ass
pooing intensifies

so how much better will this be than the 8350?

A whole 3.5

>secure
>Nvidia
Droid, please! Maxwell went legacy one month after the release of Pascal. ONE MONTH. Nothing is secure with nvidia when they're forcing obseletion.

No. Vega 20 is 7nm finfet and will be 24Tflops. Vega 10 is a single gpu at around 11-12Tflops and Vega 11 is a dual gpu.

more like allahu intensifies

i find it hard to believe we're getting vega 20 even in 2017
if the specs are true that is

I can't tell the difference between hindis and muslims.

Yeah, it seems odd, I just wonder how much it will cost. Makes me wonder what Navi will be.

Are you trolling or stupid?
GCN1.1/1st gen was the HD7000/R9 200 series (roughly)
en.wikipedia.org/wiki/AMD_Radeon_Rx_300_series
en.wikipedia.org/wiki/Graphics_Core_Next
GCNv1.1 1st Gen: HD 7770, HD 7850, R9 290
GCNv1.2 2nd Gen: R9 390, R9 390X, R9 290X
GCNv1.3 3rd Gen: Fury, Nano, R9 295X, R9 380X
GCNv1.4 4th Gen: RX480, RX470, RX460, eventually Vega.

Vega OpenGL commits already confirmed Vega is 4th Gen GCN.

agreed

You start counting at 1.1, everyone else starts counting at 1.0.

>>gtx 1080 = 8.9 tflops
>>vega 10 = 24 tflops

Vega 10 = 24 half-precision TFLOPS for probably 12 standard (fp32) TFLOPS, anon.
Also don't expect Hawaii-style 2:1 fp32:fp64 again, since that shit sucks up even more die space and power.

>buying a fucking Titan
They deserve it.

Why not? What was wrong with the AIO?

maybe vega 20 is the dual-gpu

and the specs for the big Vega 10 being "24TF" are actually specs for Vega 20.

south texas 100f is a common temperature this time of year

>half precision
>16bit

>yfw amd can't even compute with 32bits anymore

>GTX 580
>280w TDP on reference model with air blower
>GTX 590
>365w TDP with power limiter on

There are plenty of >225W TDP GPUs that were popular and ran blowers in most forms.
Having proper case cooling means it's not that big a deal. Stop sticking your computer on the floor in the corner of your non-vacuumed, carpeted room.

Neither can Nvidia, tbqh it's a sad state of affairs, but muh gaems don't need fp64

It's fake, anon

If they're focusing games, it makes sense.

good thing you're not part of a counter terrorism operative, or else rip to the indians because they aren't allahus

nigger-tier precision is the future user thanks to deep learning or some bullshit.

Nvidia actually tried this back in 2003 with the original FX 5800 DustBuster and AMD (ArtX) went with straight fp24 for the 9700 Pro since that's all DX9 required.

No. Vega 10 is 14nm single gpu, Vega 11 is 14nm dual gpu, Vega 20 is 7nm single gpu.
>techpowerup.com/226012/amd-vega-10-vega-20-and-vega-11-gpus-detailed

>mfw my 970 is running 1600mhz

If you don't mind shelling out two grand on something as frivolous as a gaming laptop.

half-prec floats are very useful.
Also, nvidia chips have absolutely horrid half-prec performance.

It says Vega 11 is midrange. Vega 10 is upper-performance segment.

So the dual gpu in Vega 11 is dual polaris chip?

Vega 20 could still be dual Vega 10 chip on die shrink?

Nvidia does double precision floating point and has been able to since before Fermi.

...

So Vega 10 ships Q1'17 and will basically be two RX 480s' worth of performance at under double the wattage, thanks to HBM2 instead of GDDR5.

If it only ends up being a ~400 mm^2 chip and a non-monstrous interposer because of only using 2 HBM2 modules, it could actually be reasonably priced.

This rumor actually seems plausible to me.

HBM2 doesn't have 64Gb modules, does it?

Apple uses AMD GPUs

vega isnt polaris

apple users dont give a shit about hardware
meanwhile if youre a normie looking for a laptop you gotta look for that intel inside sticker

HBM2 will support up to 8GB per stack, and the leak indirectly claims 2x8GB for 16GB at 512GB/s.
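The arithmetic behind that, as a sketch (HBM uses a 1024-bit bus per stack; the pin rates below are the nominal HBM1/HBM2 figures):

def hbm_bandwidth_gbs(stacks, gbps_per_pin, bus_bits_per_stack=1024):
    # bits per transfer * transfer rate, converted to GB/s
    return stacks * bus_bits_per_stack * gbps_per_pin / 8

print(hbm_bandwidth_gbs(4, 1.0))   # HBM1, 4 stacks @ 1 Gbps: 512 GB/s (Fury X)
print(hbm_bandwidth_gbs(2, 2.0))   # HBM2, 2 stacks @ 2 Gbps: 512 GB/s (this leak)
print(hbm_bandwidth_gbs(4, 2.0))   # HBM2, 4 stacks @ 2 Gbps: 1024 GB/s (GP100-style)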

>stock Titan clocks vs aftermarket oc'd 980 tis
Gee I wonder why it lost

i'm pretty fucking hyped. sitting on a 3570k and gtx670 right now. got my 2nd tower built. just waiting on news about whether kabylake will be half-compatible with win7, but i'll dual boot win10 anyway. cpu+ram is all i need now. then i wait for vega and volta to slug it out before buying another GPU that will last 5 years.

>worrying about win10
get the LTSB edition and use Shutup10

>faster transistor
holy shit i cant even

The Titan X has a power limit, so the boost doesn't work under high load. The OC sucks because the VRMs on the PCB suck.

i'm behind on the memes, friend. this shit seriously pushed me towards considering linux as primary OS.

win10 can already run linux within itself

>Nvidia does double precision floating point and has been able to since before Fermi.
They stripped FP64 after Fermi; stop pretending to be retarded
>it could actually be reasonably priced.
It will be as reasonably priced as Nvidia prices the 1080 Ti

>It says Vega 11 is midrange
You're right, i must have misread it somewhere else.

I don't know if they'd do a dual-gpu on 7nm.

Vega 11 has like 6000+ shaders, it will destroy the 1080 easily.

>apple users dont give a shit about hardware

lol, the irony. apple users make up more than half the market, yet you act as if normies looking for intel stickers are a separate demographic.

webm related is you

2011
>MOAR CORES

2016
>MOAR VRAM

2017
>MOAR SHADER UNITS
>EVEN MOAR VRAM

>like fury x will destroy the titan x

Vega 11 is supposedly the Polaris 10 replacement, which probably means 1x8GB HBM2 and 2048-2560 shaders at

>Apple
>more than half the market
dude,,,

Whatever happened to Vega = 15B-18B transistors? Vega 10 now sounds more like a double RX 480, not triple like previous hints suggested.

>bulldozer will destroy i7
>polaris will destroy Nvidia

history repeats itself

RX Nano?

Please?

>Titan x (maxwell)
>3072 cores

>980ti
>2816 cores

>Fury x
>4096 cores!

That means NOTHING

STILL HERE using GTX 285s in SLI, housefires and all
230 watts each and idgaf
I would gladly choose AMD even if it's a housefire
fuck those fucking memes
just waiting for the gaming industry to not be such a fucking cash-grab ripoff. give us powerful bits of plastic and metal, but why the fuck do they charge 500+ dollars??!?

they fucking push a button and a machine spits them out

there's no assholes with monocles and tweezers soldering teeny bits on a PCB

ITS FUCKING ROBOTS MAN. FUCK ECONOMY. FUCK THE DOLLAR. JUST ADVANCE THE TECHNOLOGY AND STOP BEING GREEDY JEWS FUCK

Don't forget, Fury X will destroy the 980ti

I understand advances in technology are always about making things take up less space, use less power, and so on. Am I the only one who thinks they should keep the GPU size bigger and just push performance for the desktop market, with a different GPU line for the laptop and console markets? The only reason I can think of why they wouldn't is that it means splitting resources in two different directions. Anyone else have thoughts on this?

in the long term, pure compute will win.
in the present, AMD gets dicked hard for skimping on the fixed-function units like tessellators, geometry setup, and ROPs.

Nah, it's not much different from wine for Linux but since Linux is open source Microsoft doesn't need to reverse engineer 5000 things.

24 TFLOPS
4
T
F
L
O
P
S

its 3072 god-tier CUDA cores vs 4096 shitty Pajeet-coded AMD cores

same situation as amd vs intel

>more waiting
Ok

The best part is by then Nvidia will have found a way to beat them.

AMD thinks 'long term" when it's not relevant, Nvidia has the money to be good now and be better in the future.

Unless you have a poorly maintained reference-design one, what? I get the same 35C+ here in Portugal and my 7870 Dual-X doesn't break out of the low 70s in gaming. Make sure the card is clean and replace the thermal paste.

The Fury X didn't NEED an AIO. It used less power than the 390X, and air-cooled Furys exist (some even with all the extra shaders unlocked). It just had one because AIOs are fucking better.

The whole smaller process = less heat meme needs to fucking die too. If anything, the housefires are only going to get worse even as TDP comes down, because the same heat gets concentrated into a smaller die area. Look at the burning train wreck known as Broadwell-E as an example.

>24TF 16-bit

>inb4 dual gpu

I guess i can break it down for the tech illiterate fanboys.

>Polaris was a stopgap measure and won't see another iteration of this arch except at the lowest end of the RX 500 series
>polaris was low-budget, power-efficiency oriented
>gives you solid 1080p and decent 1440p performance for cheap as dick, and has its best performance envelope at laptop voltages
>the closest competition is the gtx 1060, which has its clocks gimped in laptops (no 2000MHz fun for you) and jacks the price up to $1500-1600 due to nvidia inflating prices and demanding extra licensing fees for a G-sync implementation
>AMD laptops will be similar performance, similar power, but WAY cheaper due to freesync and a cheaper SOC

So yeah, if you can afford to piss money away on a mobile 1080 in your laptop then AMD doesn't have an answer, but they can still slap the 1060's shit around with Polaris. Remember how these companies work: Nvidia always builds the fastest shit at the top end and uses it to sell overpriced and inferior products at the low end.

>Vega is the new Architecture
>Vega 10 and Dual Vega drop as enthusiast cards
>Vega 11 replaces Polaris
>Vega 20 is a reworked Vega 10 on the latest 7nm finFET process and upgraded HBM2+ with much higher clockspeeds and bandwidth

Vega 10 has somewhat similar specs to the Fury X in terms of core count and an HBM controller interface, but that's the only similarity. Vega has HBM2, which is drastically improved over HBM1, and its architecture is even more radically changed than Polaris'. AMD does not want another (Fury, "an overclocker's dream") on their hands, so they will deliver something major with Vega that is easy to port to 7nm, since they have to ride this new design until 2019-2020 and keep it viable against not only Pascal but upcoming Volta

the power draw is negligible, and enthusiasts don't give a flying fuck about it anyway so long as the raw performance is there and it's reasonably priced.

Also Crossfire wipes the floor with SLI

Anyone that cares about power draw is a fucking retard.

Yeah, we should just be fine with GPU's that use 9000 watts.

>Anyone that cares about power draw is a fucking retard.

Explain

his mommy pays the electricity bill

not that guy, but I'll explain.

The power draw in terms of electricity cost is negligible unless you have an extreme case, like dual overclocked R9 390s versus 980s in SLI,

OR you live in a EuroCuck country with horrific energy prices due to their retarded green energy subsidies.

The only case where power draw becomes somewhat of a factor is in laptops, and only as it pertains to battery life. And since any GPU drains the fuck out of a laptop battery no matter what, the point is moot: you're probably plugged into a wall when you play games, and all AMD GPUs ramp down to 5-6W when not in use to save your battery for youtube videos and shitposting here.

>vega 10 faster and more power efficient than pascal titan x

no wonder the nvidiots are out in force tonight.

I'm hoping it has an option to buy it with built-in EKWB (or similar) block pre-installed.
Yes, I'm aware some retailers now sell this, but if AMD was to partner up with someone like EK, it'd be fucking fantastic for someone like me.

Taking the AIO off of my Fury X was the best thing I've done for it.

wow, you have absolutely zero reading comprehension, don't you?

...

Less power and smaller chips mean fitting more performance into the same space and power envelope. That's why AMD can't make something like the Titan X; they'd end up with a huge 600W monster.

AMD releases something new.

AMD fanboys sit back and read the News, Nvidia Fanboys GO WILD!

On a side note, AMD looks like a far more promising "investment" than Nvidia at the moment; Nvidia is an overhyped stock.

>OR you live in a EuroCuck country with horrific energy prices due to their retarded green energy subsidies.

There you have it. 0.15€+/kWh is mental, and it only gets worse when amd cards perform poorly in my specific use case (idle w/ flash-based streaming). Using a basic power meter, I measured ~90W draw from my pc with a 970 (got it used from a family member, don't judge) vs. a whopping 170W from the same pc with my previous 280x.
Granted, it's the absolute worst-case scenario, but that's how my pc sits quite a few hours per day. Doing quick math, the electricity bill dropped 4€/month with the new card, and 50€/year is a sizable amount that could be put towards a higher-tier card, for instance.
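Those numbers roughly check out; a sketch assuming ~10 hours a day at the quoted 0.15 EUR/kWh (the hours-per-day figure is an assumption):

def cost_eur_per_year(watts, hours_per_day, eur_per_kwh=0.15):
    # kWh consumed over a year, times price per kWh
    return watts / 1000 * hours_per_day * 365 * eur_per_kwh

delta = cost_eur_per_year(170, 10) - cost_eur_per_year(90, 10)
print(f"{delta:.0f} EUR/year, {delta / 12:.2f} EUR/month")   # ~44 EUR/year, ~3.65 EUR/month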

I haven't done the math yet, but the raw perf-per-watt gap between Pascal and Polaris is much smaller than between the 300 series and the 900 series. The Pascal cards still win, but the margins are smaller now.

>1450mhz
>hurr that's totally unattainable
Except for how some good-chip 480s are hitting the high 1300s, and that silicon went into production at least 9 months before Vega's will.

This is a valid strategy with DX12/Vulkan

2x silicon lottery-tier high clocked Titan XP on a single board with watercooling going through a single thick 140mm rad.
Would require 3x 8-pins and 1x 6-pin, would probably end up right around 24 TFLOPs.

an oc titan is, if i remember right, 5-10% better than an oc 980ti...

That will, as always, depend on how efficiently it performs the tasks. The 170W I mentioned comes from running at full load clocks, while modern nvidias can keep their idle clocks when performing lighter tasks. I'm pretty sure it's how they handle flash: after I got the 970, I found out how to use livestreamer + vlc, which is much more efficient. The advantage was still nvidia's, but the gap was closer; the draw dropped from 90 and 170 to 60 and 90. The only catch is you have to run games windowed fullscreen/borderless, since vlc crashes if something takes fullscreen on another monitor.
Speaking of which, you know about amd's clock issue on multimonitor, right? (it jumps to load clocks if a second monitor is connected, even at idle). I managed to work around it by enabling igpu multimonitor mode in the bios and driving only one monitor from the dgpu, the other from the igpu.
On a side note, while gaming both draw around the same (200-250W, depending on the game).

Doesn't higher power draw usually mean more heat?

As someone who has gone from the stock Fury X to a Fury X on 720mmx26mm of radiator space (shared with an i7 4790k); A single 140mm rad, no matter how thick, won't cut it.
You'd need a minimum of 240mm at the same thickness of the Fury X rad, and that's the biggest I can see any AIO option going.
Nobody can honestly say that if they're going for a top end enthusiast card like you describe, that they wouldn't have space for a 240mm rad in their case - even if their CPU had a 240mm rad of its own.

In case you're wondering why I say this; my Fury X dropped from 61c in Ashes of the Singularity to 40-41c.
That's the hottest I can make it using any of my games or benchmarking tools.

I also get issues on custom refresh rates as well on my 290, the memory won't go to idle clocks although the core speed will remain at 300mhz.

Flash streaming does increase my temps still though, from 40idle to ~50, but my airflow is horrible right now.

>dropped from 61c to 41c
Cool.

It would run fine at 91c, as do most blower cards.

A single thick 140mm would be fine.

I'm with him on this. It's a nice-sounding setup you've got, but the thing doesn't suddenly stop working when it gets too hot. These things are designed to operate comfortably at 80C these days. I know how temperature-sensitive silicon is, but 41C is going way beyond the call of duty for a standard, normal use case.

It WOULD run fine, but running that hot also reduces the expected lifetime of a card. 70-80C is the ideal maximum temperature.

That's why I suggested 240mm. A 120mm rad was reaching 61C on a single GPU; I'd imagine it'd be at least 90C for 2 GPUs, though I might be wrong, I've never looked at Pro Duo temps.
240mm of rad space would be ideal, because then it should sit in the 50-65C range at most.

Give livestreamer a shot, see how your card likes it. iirc, at 25-30C ambient temperature, the 280x on twitch would jump to 50 and 60C viewing from flash at high and source quality settings respectively, and dropped to a much tamer 42 and 47C with livestreamer + vlc.
You can even download and run the livestreamer twitch gui, if you don't want to fire them up with a run command every time.

Yes, from 20 years to 15 years.

After 5 years you'll buy a new $200 card with more performance anyway.
OEMs were putting 3-year warranties on reference 290s; clearly the cards can handle the high temps for longer than they're worth keeping.

Yes, a 120mm (40mm thick) rad was reaching 61C on a single GPU.
Hence why I'm suggesting a thick (50-55mm) 140mm with a good quality fan. As long as temps stay under 90C, which they would, it's fine.

>32GB of HBM2

This card is going to be a beast.

i'll wait for star citizen release to build a new rig

>doesn't matter if it gets trashed, I'll throw it away before it dies anyway

I will never understand this logic. The card's lifespan will drop from 20 to 15 years, or 50 to 30, or 5 to 3, depending on how good the manufacturing was for that particular card. There's zero means to accurately predict how long one item can last, so why not preserve what you have?

Correct. ML in general doesn't need more than fp16 (and many applications can do with even less than that), but it does need high processing speed and vast amounts of memory.

It's not GCN based. AMD has trashed that for a new arch on a new fab. Who knows what the clock speed is; could be 800MHz, could be 3GHz. We don't know anything other than that it'll scale down, take over from Polaris sometime at the end of next year, and finish off GCN for good.

TDP != power usage
TDP is the heat output the cooler has to be designed for, not the measured draw at the wall, and vendors rate it differently. A 150W-TDP nvidia card might draw more power than a 200W-TDP amd card if it's less efficient. You cannot equate TDP and power usage until it has been measured or the manufacturer has stated it.

holy fuck, did you hit the silicon lottery or do you have a full block? no way this thing isn't under water.

GCN is not being replaced for at least 2 more generations of arch.
Vega and Polaris family dies are all 4th gen GCN.

Navi is still GCN.

As usual, 4chan entirely misses the key detail of something.

A high-performance FP16 part is primarily a direct attack on Nvidia's workstation and HPC market.
GP104 and GP102 have completely gimped FP16 performance (1/64 of FP32), probably because of what happened with Maxwell 2 Titan X cards and professional customers.

Half-precision stuff will probably trickle down into gaming eventually, but AMD is trying to create a budget alternative to GP100's price gouging.

The Fury was a cut down chip. The Nano was actually an aircooled Fury X

For GPU's that's actually a valid strategy.

I meant that there's not much difference between the Fiji PRO and Fiji XT. They were both rated at 250W iirc; early Furys could unlock to a Fury X, and all the AIB boards would run at similar clock speeds to the X anyway.

Years of graphics cards running at 200+ watts has given us some pretty good air cooling solutions.

They should focus on DP too. It's been quite some time since a good FP64 card came out without demanding a 10x premium.

Are you autistic? That's not how the world works fucktard.

So it sounds like Vega 10 matches the TXp pretty closely in bandwidth and ALU power.
Does anybody here -not- expect AMD to fucking cheap out on ROPs and geometry like with Fiji and Polaris though?
Why does this fucking company always push GPU compute so fucking hard and repeatedly let themselves get raped by all the GameWorks-type over-tessellated garbage out in the market right now?

>buy 4GB
>get 8GB
Don't see what the problem is there.

Can't wait to laugh at all the Nvidishit 1080 owners here.
>it's able to do 4K at almost stable 30hz on low to medium settings its totes legit a 4k card stop saying it isn't REEEEEEEEEEEEEEEE

>tfw you bought a GTX 1080 last month and it's gonna be outperformed and obsolete soon

>soon

>obsolete

If you've got $1.2k to blow, the Titan X 2.0 already smokes the 1080.
It's basically 1.5 1080s at a slightly lower core clock for 30-40% higher performance.

The Vega 10's rumored raw bw and shader specs are marginally higher than the Titan's but it will probably land somewhere between the two given AMD's history of actually delivering products' full potentials.

>In 2017 AMD will destroy everything Nvidia had to offer

Why are AMD subhumans so stupid? I think you meant to say "in 2017 AMD will destroy everything Nvidia HAD to offer in 2015."

I upgraded to a Fury for 250€ when the RX 480 launched.

I think I might get a Vega 10 card when it's not too stupidly expensive.

MORE COARS

MORE VRAM

Luckily for AMD more cores and VRAM on GPUs is good

LESS COMPUTE
LESS VRAM

>obseletion
Sounds like a new novideo rendering technology xD

Low Quality Bait

MORE TDP

every time
either amd is nvidia anti monopoly counter or nvidia has someone working on the inside there's no other way to explain the downright methodically advantageous to nvidia timing of literally everything amd does from new gpu releases to "conveniently" timed driver performance updates that always make sure nvidia can sell all they can before amd looks favorable

What did AMD lie about

My guess would be that the card draws a little over the 75 watts that motherboards are specced to supply over the PCI Express connection.

They lied about the Ashes of the Singularity benchmark at the initial RX 480 reveal. Or at least they were extremely misleading and accidentally got information wrong. Initially, they said that two RX 480s were equal to a GTX 1080 in AotS while at only "50% GPU utilization", without expanding on what that meant. People assumed this meant that in crossfire, two RX 480s equal a GTX 1080 when the second GPU is only at 50% utilization. That was wrong; they later had to clarify that it was not crossfire but AotS's native multi-GPU support, which is more efficient than crossfire, and that the second GPU was being used at closer to 80%, which puts the performance of a single RX 480 much lower than what they originally led people to believe (toy math below).
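Toy math on why the utilization number mattered so much (normalizing the 1080 to 100% and assuming performance scales with the reported utilization, which is the generous reading):

gtx1080 = 100.0   # normalize the GTX 1080's AotS result to 100%
for second_gpu_util in (0.51, 0.80):
    one_480 = gtx1080 / (1 + second_gpu_util)
    print(f"2nd 480 at {second_gpu_util:.0%}: one 480 ~ {one_480:.0f}% of a 1080")
# at 51% one 480 looks like ~66% of a 1080; at 80% it's only ~56%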

If Vega 10 has only 4096 shaders and 2x HBM2 controllers but still has 15B+ transistors, what would this mean for the design?

No more jewing out on the geometry, TMUs, and ROPs maybe?

>No more jewing out on the geometry, TMUs, and ROPs maybe?
If they do that they'll stomp on Nvidia's dick even with only 4096 shaders.

AMD's design is primarily ROP and TMU starved. Their shaders are far stronger per clock than Nvidia's, but Nvidia puts out a balanced chip with better drivers.

Nvidia cheats at AOTS goy.

youtube.com/watch?v=o2b6Nncu6zY

shit image quality, but at least more fps!

I literally don't care about power consumption at all, and mine will likely go past that significantly if i buy one and OC it to shit

I can't view it, can you explain what the video says?

>Vega20
I have been piqued

So after Vega is released, will 480s actually sell at the intended price? $200 for 4GB and $250 for 8GB?

nvidia renders less of the scene, and thus gets a higher score while looking worse.

Nope, Polaris is 4th gen, Vega is obviously 5th. Navi will be 6th

We'll see about the clock speeds. I'd say higher than Polaris, but not by much.

Honestly I care about TDP. Never using a card that sucks more than 200W again (currently using RX470).

You dumb fucking nigger, do you not know that Nvidia has recently settled a CLASS ACTION lawsuit over the 970?

Desktop 1060s are good cards too. I'm definitely not waiting a year for little Vega. Happy with the 470. I bet it'll last until 7nm

1080Ti will wreck the 1080. Who buys full chips anyway?

dude, it can't be true. with HBM2 it should consume less power than that.

i'm assuming if it's 12TFLOPS it will consume less than 200 watts

MORE SHADER UNITS

let's hope you're right. 64 ROPs running at 1500ish MHz will make it fast AF

It's AMD. They're so fucking bad when it comes to energy consumption that you need at least an NH-D15 if you don't want your computer to sound like a fucking vacuum if you use an AMD processor.

They hate each other.

frankly the power savings of HBM(2) are good but nothing astonishing.

Going from GDDR5 to HBM for a ~500GB/s card will save something like 30W of I/O power, not 50 or 100.
At this point Nvidia can get comparable net power usage with GDDR5 due to their GPUs being more miserly with power than GCN GPUs.
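A rough sketch of where that ~30W figure could come from; the energy-per-bit numbers below are approximate public estimates (GDDR5 is usually quoted around 14-20 pJ/bit, HBM around 6-7 pJ/bit), not official specs:

def io_watts(gb_per_s, pj_per_bit):
    # bytes/s -> bits/s, times energy per bit in picojoules
    return gb_per_s * 1e9 * 8 * pj_per_bit * 1e-12

for gddr5_pj, hbm_pj in [(14, 7), (20, 7)]:
    saved = io_watts(500, gddr5_pj) - io_watts(500, hbm_pj)
    print(f"{saved:.0f} W saved at 500 GB/s")
# -> 28 W to 52 W; ~30W is the conservative end of that range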

>would of
>of
This entire post should just be ignored.

i want a harem of bimbos that i keep in my sex dungeon for my personal pleasure.
too much bimbofication out there. not enough old style bimbo porn.

>would of
>of

>This entire post should just be ignored

MOAR WATTS

dayum when will 7nm cum out?

Gossip-fabricating curryniggers say early sampling in 2018, shit like the Vega 2 in 2019.

So don't hold your breath famalam.

Official word from the horse's mouth is that the ramp to risk production begins in early 2018.
Nothing WCCFtech says matters. Do not give those shitty clickbaiters your page views.

Why don't you just buy what fits best with your budget for what it can do and stop being a corporate shill?

>videocardz.com

nvidiot

Because nVidia will always be the best for you budget and it makes these niggas M A D

>with shitty gddr5 memory
Might as well just get a 1080 then, at least that will run better in 4k.

> TXP is ~30% faster than a 1080
> costs twice as much
> is still a 384-bit card like the TX/980Ti and only a ~470 mm^2 GPU

given what happens when there's no competition in a given performance segment, I can't understand how the fuck some people actually want AMD to go under

HELL YEAH MOTHERFUCKER

Let's hope that, as Polaris was a "front end processing" update, Vega is a back-end (ROPs etc.) overhaul.

GCN has been using the exact same ROP design since inception and is long overdue for an upgrade. If AMD has realized that slamming the die full of ROPs ruins their power budget but too few cripples performance, let's hope a ROP update is the minimum of what we see next.

Those were review units and the early wave batches only, review units were intentionally set up that way

AMD is trying to pull the same shit Intel did with Larrabee: just pray that everything will eventually become reliant on general-purpose computation and that fixed-function units will magically atrophy away.

Except AMD is in a position to pull that off. If they can force compute on studios through the consoles, I don't see what anyone can do about it.

You can go a long way with screen space pixel processing, but it's not realistic to think that texture sampling, tessellation, triangle setup, etc. can be done efficiently on any sort of instruction-based processor.

Intel eventually realized they were wrong, as did Sony with Cell, etc.

isnt videocardz a pajeet site
i wouldnt trust a source named
VIDEOCARDZ DOT COM

videocardz =/= wccftech

wccftech is a kebab site, not a pajeet site.
Anandtech is pajeet.

videocardz is probably the most credible news/rumor tech site around. Pajeettech is diarrhea tier bad.

Speak english nigger

>Shiho Yoshimura, Officially The Hottest Athlete Of The Rio Olympics
Thank you Google reverse image search

who is that semen demon

>Those hips

The Fury is still pretty close to the X in terms of heat and TDP (although mine didn't let me unlock more shaders, RIP).

>Sapphire R9 Fury Nitro came with a BIOS toggle to allow for more power consumption, noise and heat
>still a lot more quiet and cooler than my old R9 290

There's a world of difference between Sapphire's three-fan setup and Asus' stupid setup of two slightly-different-profile fans on a heatsink where most of the pipes don't even contact the GPU.

So it's basically two 480s on one die. It will perform like a 1080 but have more memory.

If there was a good machine learning library for amd I'd buy one instantly.

Amada's thighs!

something something suicide watch
something shills
something designated

24 Tflops @ 16-bit ;-)
All calculations are done in 32-bit though

something something pajeet
something rupees
something deposited

>listing fp16 ops/s to mislead normies
nice

"the world doesnt work that way!" meme

end your life
or continue living your meaningless existence
the same outcome will be achieved