It’s early 2018, memory and GPU prices are out of control, Intel has been on a 14nm node for four years, and AMD has made a grand total of two new GPU models in the same span (aside from integrated). And my FrankenPC just died (the board won’t even power on, but it might still be under RMA).
The worst part is that nothing on the market right now is compelling. An 8700k is 40% faster than a 4770k – except my 4770k is clocked 30% faster than stock right now, and can actually do 4.7ghz at 1.25v, but I don’t think .2ghz is worth .06 volts considering the thermal impact. It’s also sitting on a cheap used motherboard, and I don’t want to tax the VRM too much. Sure, you can probably OC that 8700k to 5ghz, and 40% is nice, but I grew up in an era where a 4 year gap meant a 4 times performance improvement, not a 40% one. Even compelling Ryzen chips only match a 4770k in single core performance and cannot clock nearly as high.
So memory prices are supposed to remain out of control for the next two years – pretty much no new production until early 2020. Hopefully China can inflict some damage on Samsung in that time by spinning up its own DRAM competitors, but hoping for something good out of an autocratic state is probably not the best strategy.
DDR5 is coming though. The spec isn’t even finalized yet – considering it took 7 years for DDR4 to come to market in bulk, that isn’t too promising. But JEDEC promises availability in 2019 and widespread adoption in 2020, which would mean Sapphire Rapids and Ryzen 3 should support it. If DDR4 was anything to go by, 2021 will be the year to build DDR5 systems. DDR5-4800 as the baseline sounds a bit optimistic, though, especially if JEDEC wants to push the voltage standard even lower, to the ~1.1v range. Memory speeds are not generally essential to CPU performance, and higher frequencies have a diminishing effect. Memory has only gotten about 33% faster over the last 3 generations on average kits. Some casual appraisals of RAM in my possession, plus what good DDR4 does, put first-word access times at roughly 15 ns for DDR2, around 11 ns for DDR3, and… about 10 ns for DDR4. Bandwidth increases, but that is a function of bus sizes more so than frequency, and it’s generally hard to sustain a lot of bandwidth pressure on memory anyway.
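That flat latency claim is easy to sanity check with the usual first-word latency formula – CAS cycles divided by the memory clock, which is half the data rate. The kits below are just typical examples of each generation, not my exact sticks:

```python
# First-word latency: how long until the first word arrives after a read.
# latency_ns = CAS_cycles / memory_clock_MHz * 1000, where the memory
# clock is half the DDR data rate. Example kits are typical, not exhaustive.
def first_word_latency_ns(data_rate_mts, cas_cycles):
    """Approximate first-word access latency in nanoseconds."""
    memory_clock_mhz = data_rate_mts / 2
    return cas_cycles / memory_clock_mhz * 1000

kits = {
    "DDR2-800 CL6": (800, 6),
    "DDR3-1600 CL9": (1600, 9),
    "DDR4-3200 CL16": (3200, 16),
}
for name, (rate, cl) in kits.items():
    print(f"{name}: {first_word_latency_ns(rate, cl):.2f} ns")
```

Faster clocks keep getting eaten by proportionally higher CAS numbers, which is why three generations of DDR barely move the access time.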
On the flip side, “7nm” is coming. It’s more appropriately 10nm, since Intel is about the only company being honest about its feature sizes. It’s still a die shrink – not as significant as 22nm to 14nm, but still a good 29% size reduction on average, and all those pseudo-7nm nodes can deliver anything from worse to better results. The 14nm shrink didn’t have a profound impact on CPUs – they are still having issues with small features overheating too much to dramatically improve per-core performance without redesigning chips for better thermal behavior.
The problem there is that these are assembly line parts – the same die is binned from the highest to lowest end chips, and the manufacturing process is so multi-staged that trying to build a thermally optimized high end chip would drive up the wafer costs of the cheap budget chips enough to offend some big buyers. So they don’t do it. Hell, Intel doesn’t even solder the chips anymore because of it, yet when aftermarket users delid and apply liquid metal themselves the results can sometimes be double-digit thermal improvements.
That being said, nothing is primed to change any time soon. Ryzen had solder because AMD needed it – their chips already OC poorly, and they didn’t really have any budget AM4 chips on the first generation Ryzen wafers since they weren’t including GPUs. I expect them to keep soldering chips to try to subsidize the performance gap. Intel might use solder in a future iteration, but since even the 8700k doesn’t use it – and that whole CPU line was released basically as an “oh shit, Ryzen is way too competitive” face-heel turn – either they didn’t have time to swap out the manufacturing process or they just don’t care.
In more general terms there are two major considerations for future CPU purchases – mitigation of Spectre and Meltdown, and the ability to disable the IME / PSP. Since I’m already predicating future purchases on a die shrink, the first fully mitigated chips should be released in 2019 anyway. It’s between Ice Lake and Ryzen 3, both of which should have PCI-E 4, and with preference for AMD since they are the underdog; we’ll have to see how those chips match up against one another.
PCI-E 4 is a huge deal. Right now consumer Intel and AMD chipsets are bottlenecked hard by PCI-E lanes, because both current platforms support 24 lanes that are then split between not just the IO and GPU but also NVMe SSDs. Doubling bandwidth means current GPUs can run fine in 8x PCI-E 4 slots and current NVMe drives can run at 2x, leaving substantially more lanes for the various bus devices in the system. If either Intel or AMD fails to adopt PCI-E 4 where the other succeeds, that is functionally a deal breaker.
What’s really interesting about PCI-E lanes is that most GPUs show no perceptible performance hit running at 8x right now, so next gen midrange cards could feasibly run at 4x just fine.
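The napkin math backs that up – PCI-E 3.0 and 4.0 both use 128b/130b encoding, so per-lane throughput simply doubles with the transfer rate, and an 8x gen 4 link moves as much data as a 16x gen 3 one:

```python
# Rough usable PCI-E link bandwidth: transfer rate per lane, times the
# 128b/130b encoding efficiency shared by gen 3 and gen 4, times lane count.
def link_bandwidth_gbs(rate_gts, lanes):
    """Approximate usable bandwidth in GB/s (GT/s -> GB/s is divide by 8)."""
    encoding_efficiency = 128 / 130
    return rate_gts * encoding_efficiency * lanes / 8

gen3_x16 = link_bandwidth_gbs(8, 16)   # PCI-E 3.0: 8 GT/s per lane
gen4_x8 = link_bandwidth_gbs(16, 8)    # PCI-E 4.0: 16 GT/s per lane
print(f"gen3 x16: {gen3_x16:.2f} GB/s, gen4 x8: {gen4_x8:.2f} GB/s")
```

Same pipe, half the lanes – which is exactly where the freed-up lane budget comes from.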
To wrap up CPUs, the hope is that renewed competition will drive Intel to use solder, adopt larger caches, maybe even make an iGPU-free mainstream part. 8 cores / 16 threads would be very nice. 10 cores / 20 threads would be insane. On the AMD side, platform limitations will probably stop them from pushing a 16 core AM4 part – Threadripper et al. only work by mangling two dies together to begin with, and AMD has demonstrated in multiple previous architectures an inability to really scale up cores. And since Ryzen’s CCX clusters are 4 cores each, they almost certainly cannot fit another one into an AM4 die or support it in current chipsets. Of course, chipset support could be somewhat mitigated by releasing a new chipset iteration just for 12 core AM4 CPUs. If we saw a 12 core Ryzen 3 competing with an 8 core Ice Lake, that would be a very nice competition. If it’s 6 core Ice Lake against 8 core Ryzen 3, that’s a less interesting contest unless AMD can really pull some IPC out of its die shrink.
CPUs are boring affairs – they have been fairly stagnant for generations, and the latest upset has just been renewed competition forcing already known capabilities into the mainstream. Unlike with GPUs, where every vector of improvement is substantial and tangible, pushing 16 or more cores on consumer desktop CPUs is a much less effective scaling method, one AMD is currently stuck reeling from. Today they face pressure on both ends of the market: a $300 8700k competes with their 1800x, and Intel’s substantial IPC lead makes them neck and neck in threaded workloads despite the 2 core difference. The low end is also a problem – AMD has to pit 6 core chips against 4 core Intel parts that trounce them in IPC and overclocking headroom, where again the single threaded gap means 4 cores can compete with 6. IPC is then king on the CPU, and all meaningful improvement ends up going there.
GPUs are different. Nvidia demonstrated with their 1000 series what substantial frequency jumps can do – a 1080 has not just 20% more shader units than a 980, it’s running each of them up to 45% faster. The 1080 nearly doubled the performance of the 980 while only costing 15w more TDP doing it. GPU die shrinks continue to mean tremendous advances. AMD has been deeply troubled by its 14nm transition – the best they could put out a year and a half late to the party (two if you count aftermarket card releases, and never if you count the crypto hell that ruined GPU prices) was a 20% improvement in IPC and clocks, while pushing the Vega 64 well over tolerable PCI-E power limits to beat the Fury X by only 40%. We have not seen anything close to what Nvidia has managed from AMD – Nvidia got its increased clocks effectively for free at 14nm, while AMD paid for them with substantial power draw.
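Using those round numbers (the exact gains vary by boost bin and workload, so treat these as approximations), shader count and clock speed multiply together:

```python
# Rough raw throughput gain of the 1080 over the 980, using the post's
# approximate figures: 20% more shader units, each clocked up to 45% higher.
shader_gain = 1.20   # approximate unit-count increase
clock_gain = 1.45    # approximate clock increase at boost
throughput_gain = shader_gain * clock_gain
print(f"~{(throughput_gain - 1) * 100:.0f}% more raw shader throughput")
```

About 74% more raw throughput – which is how “nearly doubled” happens without a massive TDP jump.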
GPUs are also different in that I simply will not buy an Nvidia product. I have remarked all over the Internet, as have others, on how insanely evil Nvidia is. They are Microsoft-grade awful in how they conduct business and distort the industries they operate in. So I’m not so much interested in the best GPU in 2019 or 2020 as I am in hoping AMD can get its act together and capitalize on another chance to produce something good from a die shrink.
Probably the most important thing AMD can do now is ignore Nvidia. Make the best GPU they can with their IP at the 7nm node on a 400mm² die, then make SKUs at 300mm² and 200mm² for their other product series. Keep that 400mm² card at 250w TDP, even if they have to underclock it to do so. And sell it like a 400mm² card – cryptocurrencies can do at least one good thing for AMD, and that is let them sell cards at substantial margins competitive with what Nvidia is doing. $600 for a 314mm² die is only outrageous if someone can produce a competitive product for less, which AMD has a hard time doing with a mammoth 486mm² die.
Their 300mm² card should target 200w TDP and be the midrange option that can finally fill the void they keep missing against Nvidia – AMD consistently makes massive die cards like Hawaii, Fiji, and Vega, or smaller 200mm² die cards like Polaris. It hasn’t been since Tonga that they made a ~300mm² die of worth, and even Tonga was disappointing against the older 7970 die. The 7970 era was a golden age for AMD while Nvidia was behind; it’s a shame they have lost that segment.
And finally they can have a 200mm², 150w card that fills in their bottom card tiers. With the modern APU quality demonstrated by Raven Ridge I just don’t see much reason to design and manufacture new GPUs smaller than that – just shrinking the older dies makes sense, or better yet, just keep making those as-is. The market for anything below a 570-class GPU is really just computers that lack any graphics at all anyway.
If AMD can manage to discover the Nvidia “trick” to extremely high clocks despite high core counts in their 7nm iteration of GCN, and if they keep their power profiles and die sizes under control, it really doesn’t matter what Nvidia puts out – AMD still has all those Freesync monitors Nvidia refuses to support on their side, plus a mining scene that will now never truly die to depend on. Much like the 28nm GPU era ushered in “GHz Edition” cards, if 7nm becomes the era of common 2ghz, 2000+ core GPUs, it would be a great time to migrate.
Nvidia got 80% more transistor density in their 28nm -> 14nm shrink. They got over double going from 40nm to 28nm between the 500 and 600 series. AMD got about 90% scaling going from the 6000 to 7000 series, and a similar 80% transistor bump going to 14nm. Each die shrink meant dramatically smaller dies for similar performance, or way more powerful large chips, in ways CPUs just don’t track any more. So waiting for Navi will definitely be worth it no matter what; let’s just hope that almost 4 billion more transistors doesn’t just become 20% more TDP for 30% more clocks and 20% more IPC.
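For context, a true full-node shrink should scale density with the square of the feature size ratio, so those ~80% observed figures show how far the marketing node names fall short of nominal scaling:

```python
# Nominal density scaling for a die shrink: (old feature size / new)^2.
# Real-world gains (like the ~80% bumps above) fall well short, partly
# because "14nm" and "7nm" are marketing names, not literal feature sizes.
def nominal_density_gain(old_nm, new_nm):
    """Idealized transistor density multiplier for a shrink."""
    return (old_nm / new_nm) ** 2

naive = nominal_density_gain(28, 14)  # a literal 28nm -> 14nm shrink
observed = 1.8                        # the ~80% bump cited above
print(f"nominal: {naive:.1f}x, observed: {observed}x")
```

A literal halving of feature size would quadruple density; getting 1.8x instead is the gap between the node name and the silicon.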
This anticipated jump to 7nm on GPUs seems really fast, too. It took four years to go from 28nm to 14nm, but now 7nm is supposed to land in something between three and four. It also kind of has to be closer to three, because neither AMD nor Nvidia has any new consumer series lined up this year at all.
Probably the most boring subject: NAND memory is hot, prices are going up since it comes out of the same fabs as GPUs and memory, and unlike with SATA SSD growth, the competitive market for NVMe drives seems much more muted. There should be constant pressure to push NVMe disks up in size, up in linear throughput, up in IOPS, and down in price, but that does not seem to be happening. The Samsung 950s are barely worse than the 960s and cost the same. A 500GB SATA SSD today hits the low prices that were found four years ago. NVMe drives have gotten cheaper, but I don’t think they can go much lower, since they demand a price premium for their controllers above SATA SSDs, which just don’t get cheaper anymore. Hell, I got my MX300 750GB for $135 in 2015.
As the 7/10nm stuff starts rolling out I’ll start adopting. The preferred experience would be a $350 12 core Ryzen 2 CPU that can actually hit 4.5 – 5ghz at 1.3 – 1.4v; if RAM prices get way better by then, 32GB as 4×8GB of 3200mhz CAS 14 at 1.5v, otherwise 2×8GB, because swap is fine but RAM costing over twice as much as it did years ago is not. Sadly I won’t be getting in early on the DDR5 generation but at the tail end of DDR4, but die shrinks are more relevant than memory standards – albeit a DDR5 chipset from AMD might prompt them to go up to 16 cores on the consumer platform.
I do want to build this system in Dad’s Obsidian 450D – I’ve learned from experience that while Mini-ITX is space efficient and all, board quality really suffers and you come up short on connectivity no matter what you get. I already have some Hyperborea fans to run in the front, and can run 2x 140mm fans in constant PWM mode on the top. Throw in either the Mugen Max if it fits, or one of those twin tower coolers (or, even better, just an H7 or something if Ryzen 2 is like Ryzen 1 and has very limited OC headroom). I’d be interested in trying a MasterLiquid 240 radiator or some such, but I won’t have anywhere to put it – the 450D’s top fan grill can’t support a radiator. I guess I could put it in the front, but I have so many lovely 140mm fans to use already.
The graphics card is the real unknown quantity, and it really depends on whether AMD can (a) not drop the ball again on Navi, and (b) deliver working Linux support in a timely manner. Now that DC is mainlined and should be incredibly stable by late next year, adding new hardware enablement shouldn’t be a problem by then, but we will have to see what happens. Additionally, if the Navi line is mediocre on the top end like Vega is now, I’d probably skip the higher end card (if one even exists – AMD might pull another 400 series and just do a midrange card I’d be stuck with), but if they can fix their architectural mistakes and get back to 6970 / 7970 competitive quality, I’ll be all over a higher end card if it isn’t pulling 300w.
Price becomes a concern, though. The most recent mining trend is starting to wane, so I imagine GPU prices will get back to MSRP this summer as the glut of mining hardware drains out. It will then probably ramp up again in late 2018 / early 2019 – this is going to be a cyclical thing as people get dollar signs in their eyes and pump and dump GPUs for mining whatever random cryptocurrency sees a price spike big enough to make mining worth it.
There is another possibility that still might unfold, though. Cryptocurrencies aren’t big fans of stable equilibria, and this latest fervor is definitely a bubble, but the possibility exists that there is enough free money around, and enough general inflow from the muggle public, to keep stimulating crypto enough to keep GPU mining in demand perpetually. That would mean the best mining cards (cough, AMD) never return to MSRP, because they literally print money.
So how much does this thing cost? Let’s see the optimistic and pessimistic outlooks. If AMD stays on the 8 core trajectory, the midrange 8 core of 2019 will probably be in the $250 range, since it’s outclassed by Intel’s high end. If it overclocks well I’d get it anyway. Unless AMD does have meaningful IPC improvements, in which case it might cost $350, but that would also probably be worth it. If we see a magical 12 core AM4 chip, that would probably run $450 – AMD will try to push a price premium but will also recognize the market for such a chip is tiny. It’s nothing close to a workstation or server platform; even with PCI-E 4 the lane count will simply limit it, so they can’t inflate the prices too much. If they do that, though, a 10 core / 20 thread chip would probably turn up that is very price competitive, and that I would strongly consider.
If 7nm turns out to be a disaster and Intel ships a strong 10nm process, and if there is still no mitigation for the PSP while me_cleaner still works on the Ice Lake chipset, then I might be stuck going with Intel again despite my preference for AMD. Either way, the CPU will run $250 – 350, unless Intel makes a compelling 8 core or AMD a compelling 12 core in the $400 – 450 range. Split the CPU price down the middle and call it $300.
The CPU cooler is also in consideration. If I succumb to peer pressure and get a 240mm liquid cooler, it will only be around $65. Good twin tower coolers come in at $60, so that’s also an option. But I also have this Mugen Max – if it fits, I’d consider using that. Or an H7. Who knows; it depends on overclockability.
Memory is either going to get better or not. In the most optimistic case, with full cost recovery, I’d get 32GB at $60 per 16GB kit for $120, but more likely I’ll be paying around $160 for 16GB and waiting for price drops that never come.
I’m going to be looking for an ATX board from anyone but ASUS. They are on my naughty list, with two dead boards and a busted HDMI monitor to their name from my possessions. I’ve had an alright experience with ASRock, a positive one with MSI, and a neutral one so far with Gigabyte, albeit I’m really surprised a B85 board can push a stable 4.7ghz OC on a 4770k. Either way, I want at least a 6 phase CPU power system, preferably 8 – but quality beats quantity, especially on consumer chipsets, and I definitely don’t need 10+ phases. Gigabyte’s X370 Gaming 5 is a great reference point. Two M.2 slots are probably optimal, assuming the chipset can handle such data rates. PCI slots are basically meaningless since I won’t be running multiple GPUs. The 450D does support 5 hard drives (which is what I currently have) plus 2 5.25″ bays I can throw some random DVD drives in. So 7 SATA ports minimum, but considering I would be really interested in getting a 3.5″ hot swap bay in the front to test hard drives, I’d probably want 10 total. It depends on the chipset in large part – I’m not a big fan of third party SATA controllers, so if the chipset can only do 8 SATA ports I’d stick with that.
Anyway, a reasonable $150 board that skips all the stupid features but at least has POST codes would be great. On the rear IO, two Ethernet ports would be really nice, USB-C is basically required future proofing, and I do want a PS/2 port for the keyboard I don’t see myself getting rid of, despite a desire for some dedicated media keys – I could realistically just keybind something to those functions anyway if I cared, and this keyboard has held up perfectly well for four years now.
For storage I’ll just look for a deal on a higher end 500GB NVMe drive. I don’t see prices getting competitive enough to make 1TB a reasonable option, especially since 500GB has been trending down in lockstep with 1TB on NVMe so far anyway. SATA drives have gotten to the point where $250 1TB disks are happening, especially with the 850 Evo and MX500. I’d like two NVMe slots anyway for future expansion (I’m really enjoying my 850 Evo RAID 1 over here, despite the fact that I can’t really perceive the performance anyway). I’m still going to have to partition some disk with a damn EFI partition. What a bummer. UEFI is absolute cancer. If any upcoming platform could support coreboot, that would be really nice, so I could just payload my kernel from a btrfs NVMe disk.
Last but definitely not least… in price… will be that GPU. The optimistic approach is that AMD’s engineers get their act in order and stop trying to dick measure Nvidia on the high end. They just don’t have the budget to do what Nvidia does, with a half dozen hyper optimized GPU SKUs in the consumer segment alone. If they get it right, and we get a rock solid 250w Navi that can deliver the 150% performance over Hawaii we are due, I’ll be all over that up to $500. If the midrange card can pull an 80% improvement over Polaris, that would also be a strong contender around $250.
But those are optimistic. If we trace history, big Navi will not only be late and well beyond my buying window, it will be a 300w disaster that is way too expensive and delivers mediocre performance while acting as a space heater. That one’s right out. Midrange Navi will probably pull a Polaris and just match Vega at 10 billion transistors, despite having the potential for huge clock improvements at the same density if designed right, like the GeForce 10 series – and then all those cards will be bought out by miners anyway.
Well, I guess miners could buy them – I’d probably try to snag one at MSRP on day one if it were even remotely usable. Still, that would be $250. But let’s say the $450 high end step-down can pull a Vega 56, round two, and actually deliver on the performance promise – then we’re talking.
So the low end estimate would be $120 for 16GB of magic dream memory, a $250 8 core, a $140 board, $180 for an M.2 SSD, and a $250 GPU at release, with a reused cooler, case, fans, and PSU. That’s around $940 for a big hefty system. On the other end of the spectrum we have a magical $400 12 core, 32GB of not-so-dream memory (around $170 a 16GB kit), a bit more for a good multi-slot M.2 board at around $180, an AIO water cooler at $65, a 1TB NVMe drive if prices get reasonable at around $300, that $450 GPU, and otherwise reused parts. That’s $1735.
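Tallying both builds to double-check my arithmetic:

```python
# Itemized totals for the two builds sketched above; reused parts
# (cooler, case, fans, PSU) cost nothing in both scenarios.
low_end = {"memory_16gb": 120, "cpu_8core": 250, "board": 140,
           "m2_ssd": 180, "gpu": 250}
high_end = {"cpu_12core": 400, "memory_32gb": 2 * 170, "board": 180,
            "aio_cooler": 65, "nvme_1tb": 300, "gpu": 450}

print(f"low end: ${sum(low_end.values())}")
print(f"high end: ${sum(high_end.values())}")
```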
None of that is too bad, I guess. I’m going to need to make some side money between now and then to justify an $1800 build, though. Let’s hope Zen 2 and Navi can deliver…