Monday, July 23rd 2018

Top Three Intel 9th Generation Core Parts Detailed

Intel is putting the finishing touches on its 9th generation Core processor family, which will see the introduction of an 8-core part on the company's LGA115x mainstream desktop (MSDT) platform. The company is also making certain branding changes. The Core i9 brand, which is being introduced to MSDT, denotes 8-core/16-thread processors. The Core i7 brand is relegated to 8-core/8-thread (more cores but fewer threads than the current Core i7 parts). The Core i5 brand is unchanged at 6-core/6-thread. All three will be based on the new 14 nm+++ "Whiskey Lake" silicon, which is yet another "Skylake" refinement, and hence one shouldn't expect per-core IPC improvements.

Leading the pack is the Core i9-9900K. This chip is endowed with 8 cores, with HyperThreading enabling 16 threads. It features the full 16 MB of shared L3 cache available on the silicon. It also has some stellar clock speeds: 3.60 GHz nominal, with 5.00 GHz maximum Turbo Boost. You get 5.00 GHz across 1 to 2 cores, 4.80 GHz across 4 cores, and 4.70 GHz across 6 to 8 cores. Interestingly, the TDP of this chip remains unchanged from its predecessor, at 95 W. Next up is the Core i7-9700K. This chip apparently succeeds the i7-8700K. It has 8 cores, but lacks HyperThreading.
The Core i7-9700K is an 8-core/8-thread chip clocked at 3.60 GHz, but its Turbo Boost states are a touch lower than those of the i9-9900K: 4.90 GHz single-core boost, 4.80 GHz 2-core, 4.70 GHz 4-core, and 4.60 GHz across 6 to 8 cores. The L3 cache is reduced to the 1.5 MB per core scheme reminiscent of previous-generation Core i5 chips, as opposed to the 2 MB per core of the i9-9900K, so you only get 12 MB of shared L3 cache.

Lastly, there's the Core i5-9600K. Very little has changed from the current 8th generation Core i5 parts: these are still 6-core/6-thread chips. The nominal clock is the highest of the lot, at 3.70 GHz. You get 4.60 GHz 1-core boost, 4.50 GHz 2-core boost, 4.40 GHz 4-core boost, and 4.30 GHz all-core. The L3 cache amount is still 9 MB.
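Condensed, the reported boost ladder works out to a small lookup table. Here's a quick Python sketch of it (clocks are from the leak above and unconfirmed by Intel; the behavior at core counts not explicitly listed, such as 3 or 5 active cores, is an assumption that the next listed bin applies):

```python
# Reported max Turbo Boost (GHz) by number of active cores.
boost_table = {
    "i9-9900K": {2: 5.0, 4: 4.8, 8: 4.7},
    "i7-9700K": {1: 4.9, 2: 4.8, 4: 4.7, 8: 4.6},
    "i5-9600K": {1: 4.6, 2: 4.5, 4: 4.4, 6: 4.3},
}

def boost_ghz(chip: str, active_cores: int) -> float:
    """Assume the smallest listed bin covering the active core count."""
    bins = boost_table[chip]
    return bins[min(c for c in bins if c >= active_cores)]

print(boost_ghz("i9-9900K", 3))  # 4.8 -- falls into the 4-core bin
```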

The three chips are backwards-compatible with existing motherboards based on the 300-series chipset with BIOS updates. Intel is expected to launch these chips towards the end of Q3-2018.
Source: Coolaler

121 Comments on Top Three Intel 9th Generation Core Parts Detailed

#76
Valantar
efikkan said: You are still mixing overclock with AVX and boost (non-AVX).
We still don't know which voltage this CPU will run at during boosting, so wait and see how much the actual consumption will be.
True, but we can make relatively educated guesses based on current Intel 14nm+ products. For example, GamersNexus clocked their 8086K (which is, as they say, a top-binned 8700K) at 1.3V@5GHz, which is 0.1-0.15 V lower than most 8700Ks. At 5.3 GHz it was Prime95-stable at 1.45 V, consuming 250 W from the EPS cable (lower in lighter loads: 200 W in Blender, 195 W in Cinebench - CB is not AVX). This, being a top bin, is a best-case scenario for 14nm+, with significant power draw reductions versus the average 8700K. For reference, GN's 8700K sample needed 1.4 V for 4.9 GHz. There is no reason to believe 14nm++ is noticeably better than this (it's not a node shrink, after all; at best a change akin to AMD's "12nm" process versus the original 14nm), and it's a safe bet that the average high-end CPU will be comparable in voltage-frequency scaling to the average 14nm+ CPU. Also, an increase in cores and die area usually means an increase in leakage and generally more things that can go wrong, which leads to higher voltage and power consumption at equal clocks. In other words, it's quite safe to assume that any 8-core CPU from Intel manufactured on a 14nm node, based on whatever optimization of the Skylake arch they put out next, is going to behave roughly like Kaby Lake and Coffee Lake in terms of thermals and power draw - and any additional cores and/or frequency will add heat and power in line with this.
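To put rough numbers on that reasoning: dynamic power scales roughly as P ≈ cores · V² · f, so you can ballpark an 8-core part from the 6-core figures quoted above. A minimal sketch, assuming the ~200 W Blender draw from the GN test as the baseline and treating per-core switched capacitance as constant (the 8-core operating points are my guesses, not leaked specs):

```python
# Rough dynamic-power scaling: P ~ cores * V^2 * f.
# Baseline: GN's 8086K, 6 cores at 5.0 GHz / 1.30 V, ~200 W in Blender.
BASE_CORES, BASE_V, BASE_F, BASE_W = 6, 1.30, 5.0, 200.0

def scale_power(cores: int, volts: float, ghz: float) -> float:
    """Scale the measured baseline by core count, V^2 and frequency."""
    return BASE_W * (cores / BASE_CORES) * (volts / BASE_V) ** 2 * (ghz / BASE_F)

# Hypothetical 9900K-style all-core load: 8 cores at 4.7 GHz.
for v in (1.20, 1.25, 1.30):
    print(f"8C @ 4.7 GHz, {v:.2f} V -> ~{scale_power(8, v, 4.7):.0f} W")
# ~214 W / ~232 W / ~251 W -- far past a 95 W TDP in any case.
```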
efikkan said: AMD gaining some market share is to be expected when they go from totally sucking to having okay offerings in some segments.
Yes, and? That doesn't change the fact that Intel stands to lose quite a lot of market share, revenue and mindshare - which seemed to be what you were arguing against a moment ago.
efikkan said: The only thing Intel have to fear is the massive AMD hype. Even if we assume the optimistic 15% IPC gain in Zen 2, we'll still have to wait for Zen 3 for AMD to come close to Skylake (2015) in IPC, and that's assuming Intel does nothing in the meantime. Remember that even though Zen cut over half of Intel's advantage, the improvements in Zen are mostly "low-hanging fruit" and reversals of mistakes made in Bulldozer. Pushing IPC another 15% would require more effort than the improvements they made in Zen(1). AMD have promised improvements, not specifically 15%. I don't think Intel is scared if AMD plans to recycle Zen for five generations.
You've got a few things wrong here. First off, the "low-hanging fruit" quote you're referring to is from an AMD engineer (or exec? can't remember, might have been Dr. Su) talking about the future of Zen after the launch of Zen 1. Specifically, they said that they knew where the low-hanging fruit was in terms of improving the performance of the base Zen design over the coming generations. You're absolutely right that this will be difficult, but they also made it clear that the focus for Zen 1 was finalizing the design and getting it out, and they left out quite a bit of well-known optimization to ensure proper time to market. While this low-hanging fruit might not represent a massive IPC increase, it's still something. Intel hasn't released an updated architecture since 2015, as you so helpfully point out (largely caused by the never-ending 10nm delays, but they could have released a proper update if they wanted to - 2 years is enough for that). KBL and CFL are just SKL with a tuned-for-frequency production process, after all. Zero change beyond that (and adding cores).

Secondly, your phrasing makes it sound like you're saying Zen was built on Bulldozer. Zen was a clean-slate design, sharing no architecture with Bulldozer. Which, again, underscores how you've misunderstood/misremembered the whole "low-hanging fruit" thing. You don't get a 55% IPC improvement and a ~100% efficiency improvement by fixing "low-hanging fruit" in a design you've already iterated on multiple times.

Third, Ryzen's IPC is around 10% lower on average - more in a few benchmarks (particularly gaming), less in others, and faster in some (like CB15 and most rendering tasks). A 15% increase (on average) would as such make it ... wait for it ... faster than Skylake (0.90 × 1.15 ≈ 1.035, or roughly 3-4% ahead), and as such also faster than KBL and CFL and whateverLake Refresh 8-core. I really don't think this is going to happen, particularly in gaming, but I see no reason to think AMD won't close the IPC gap some with Zen 2. Zen is facing its first major update, while the upcoming 8-core Intel chip is (likely) not an architecture update at all, and whatever comes after it will be the 5th revision of the same design. In other words: AMD will likely have a far easier time finding places to improve performance, as Intel has been at this for years.
efikkan said: The bundled coolers with the Ryzen 7 2700/X might be better than the crappy one Intel bundles with several i5/i7 CPUs, but they are not nearly good enough to properly cool these CPUs, especially not with the super-aggressive boosting done by Ryzen 2. I wish both of them dropped bundled coolers for any >$200 retail CPU. These crappy downdraft coolers don't work well at all in cases, especially when you have enough airflow to cool this and a GPU. Why not make these coolers an optional bundle instead? AMD could instantly shave >$20 off their price. It's sad how many stock coolers are thrown in the trash every year…
Remember, stock coolers aren't for OCing. And while I personally wouldn't run a 2700X on the stock cooler, at least it doesn't thermal throttle - unlike the "65W" i7-8700 with its stock cooler. Also, for those of us with custom water loops (or anyone with a faulty cooler, or doing an "everything is working" test mount of a new build), stock coolers are very useful for troubleshooting and various pre-build tests. I agree that it's a shame that thousands of coolers get thrown out, but on the other hand the savings would be negligible, and the coolers have utility for some of us. There's no way AMD is paying $20 for those coolers.
#77
CrAsHnBuRnXp
R0H1T said: Why would they do that? Their whole business is built around fleecing customers... er, maximizing profits!
What business isn't?
#78
Fabio Bologna
For God's sake, Intel, give us some more PCIe lanes! Even 24 would be nice to have for prosumer customers!
#79
Valantar
Fabio Bologna said: For God's sake, Intel, give us some more PCIe lanes! Even 24 would be nice to have for prosumer customers!
Yeah, both Intel and AMD need to step up their game when it comes to PCIe on mainstream platforms. Dual full-speed m.2 direct from the CPU should be the minimum going forward. Or at least allow for lane bifurcation so we can use some multi-SSD riser cards.
#80
lexluthermiester
Gungar said: PS: I have no idea why I argue with someone that bought an FX CPU.
And yet you do. But their choice of CPU isn't supposed to have anything to do with whether or not someone is worthy of conversing with. So in reality, your statement says much more about you than it does about them. Ponder that.

And just FYI, more cache does not always equal faster operation.
#81
Fabio Bologna
Valantar said: Yeah, both Intel and AMD need to step up their game when it comes to PCIe on mainstream platforms. Dual full-speed m.2 direct from the CPU should be the minimum going forward. Or at least allow for lane bifurcation so we can use some multi-SSD riser cards.
I was thinking even more high-level. I mean, 10 Gbps NICs, RAID cards, any sort of graphics accelerator like RED Rockets, or any other more esoteric expansion card like your m.2 risers, etc., take a good chunk of throughput from the PCIe subsystem... if you want a high-end GPU to run at a full x16 you are already out of luck!
#82
Valantar
Fabio Bologna said: I was thinking even more high-level. I mean, 10 Gbps NICs, RAID cards, any sort of graphics accelerator like RED Rockets, or any other more esoteric expansion card like your m.2 risers, etc., take a good chunk of throughput from the PCIe subsystem... if you want a high-end GPU to run at a full x16 you are already out of luck!
Honestly, if you're able to afford RED's accelerator cards, you can afford (and make use of!) an HEDT system. I don't really see the issue there, unfortunately. Same goes for the RAID card if you're hooking it up to anything fast enough to saturate the chipset-to-CPU link - if you can afford that many fast SSDs, you can afford (and likely use, given the need for an SSD RAID setup) at least an entry-level HEDT PC. It might be that video work and other GPU compute require more GPU bandwidth than gaming, but at least for gaming the performance difference between PCIe x16 and x8 is imperceptible. Still, a few more lanes and some added flexibility in how they can be configured would be a great thing. And I wholeheartedly agree that 10GbE needs to become more common - the sooner the better.
#83
hat
Enthusiast
At 1920x1080, the GTX 1080 suffers a whopping 4% performance loss going from full PCIe 3.0 x16 down to 3.0 x4. The difference is even smaller at higher resolutions. PCIe lanes per CPU never used to be a talking point, and we'd usually be stuck with x8/x8 SLI/CrossFire configurations without any other add-in cards, such as a sound card. Add that and we're down to x8/x4. Yeah, it looks really bad on Intel that AMD is able to deliver a massive amount of PCIe lanes for less money, but who is really using all of those? It's like complaining your local 4-lane highway isn't sufficient for your fleet of Mack trucks, when in reality you're just another guy driving a sedan. Standard Intel desktop chips have 16 lanes, plus 20 from H370, giving you 36. Even if you run some weird configuration with 2 graphics cards, a PCIe m.2 SSD, a PCIe sound card, a 10Gb NIC, AND a RAID card, you'll be just fine if you can live with running your cards at x8/x8, which everyone can. And that's on the mainstream platform. So even a mainstream desktop system can function as both a fast gaming rig and a file server.
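As a back-of-the-envelope check on that worst-case build, here's the lane tally in a quick sketch (the per-device lane counts are typical values I'm assuming, not the spec of any particular board):

```python
# Hypothetical worst-case build from the post above, tallied against
# 16 CPU lanes + 20 chipset (H370) lanes = 36 total (PCIe 3.0).
devices = {
    "GPU #1 (x8)":  8,   # x16 card running at x8
    "GPU #2 (x8)":  8,
    "m.2 NVMe SSD": 4,
    "sound card":   1,
    "10Gb NIC":     4,   # assuming an x4 card
    "RAID card":    8,   # assuming an x8 HBA
}

total = sum(devices.values())
print(f"Lanes used: {total} of 36")  # 33 of 36 -- it fits, on paper
# Caveat: everything not on the CPU's 16 lanes shares the x4 DMI
# uplink to the chipset, so peak concurrent bandwidth is lower.
```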

Who needs 10Gb Ethernet anyway, outside of data centers? Do some home users enjoy moving massive files around between RAMdisks on a regular basis? 1Gb Ethernet is still fine. Now, 100Mb Ethernet, or worse yet, wireless G :fear: is worth complaining about being slow... but unless you're doing something weird, like mirroring 10TB of data to some backup box on a daily basis, 10Gb doesn't seem really useful for home users yet.
#84
trparky
The Core i7 brand is relegated to 8-core/8-thread (more cores but fewer threads than the current Core i7 parts).
What the... Really, Intel? You have got to be kidding me. I feel like I just got punched in the balls here.
#85
Metroid
After Nehalem, which was the last CPU with free Hyper-Threading, Intel said they would charge around $100 for Hyper-Threading, and it looks like this naming convention reflects that. But there is something missing (6-core/12-thread) in there, which I assume will come later on. So my take: I would drop the i9 name and introduce an i8; i9 would be for HEDT.

i8 9800kht - 8/16, 95W
i8 9800ht - 8/16, 95W
i8 9800k - 8/8, 95W
i8 9800 - 8/8, 65W
i7 9700kht - 6/12, 95W
i7 9700ht - 6/12, 95W
i7 9700k - 6/6, 95W
i7 9700 - 6/6, 65W


So they would all still be unlocked via the "k", with "ht" denoting included Hyper-Threading. I myself prefer CPUs with Hyper-Threading off; in my experience HT adds 40% more heat for 25% more performance.

I guess Intel does not like even numbers: i3, i5, i7, i9. So we might never see an i2, i4, i6, i8 or i10... hehe, maybe iX?
#86
GoldenX
Where is my 1-core/2-thread, virtualization- and AVX-locked "i1"?
#87
Melvis
I'm very surprised at those clock speeds; that's a lot higher than I was expecting, considering how hard it is to keep the 8700/K at full all-core boost speed without thermal throttling. That's a whole 300 MHz higher on all cores than the 7820X. You're going to have to spend $100-$200 on top of the retail price for a good air/water cooler to keep these 8-core CPUs from thermal throttling, and at a 95 W TDP I find that hard to believe. Time will tell, I guess, how these really fare in the wild. Yes, they're going to be fast, that's for sure, but at what cost?
#88
Metroid
GoldenX said: Where is my 1-core/2-thread, virtualization- and AVX-locked "i1"?
That is how it was supposed to be, hehe, with the core clocked at 30 GHz.
Melvis said: I'm very surprised at those clock speeds; that's a lot higher than I was expecting, considering how hard it is to keep the 8700/K at full all-core boost speed without thermal throttling. That's a whole 300 MHz higher on all cores than the 7820X. You're going to have to spend $100-$200 on top of the retail price for a good air/water cooler to keep these 8-core CPUs from thermal throttling, and at a 95 W TDP I find that hard to believe. Time will tell, I guess, how these really fare in the wild. Yes, they're going to be fast, that's for sure, but at what cost?
We will have to wait for reviews for deeper findings. Right now the 9700K, at 8/8, looks like the way to go, but if it costs more than $300 then it's a step back. I myself think the i5-8400 is the best performance/cost CPU out there. I'm hoping Intel will launch an 8/8 CPU that turbo-boosts to 4.0 GHz on all cores in the $229 price range: something similar to an i5-8400, but 8/8.
#89
Valantar
hat said: Who needs 10Gb Ethernet anyway, outside of data centers? Do some home users enjoy moving massive files around between RAMdisks on a regular basis? 1Gb Ethernet is still fine. Now, 100Mb Ethernet, or worse yet, wireless G :fear: is worth complaining about being slow... but unless you're doing something weird, like mirroring 10TB of data to some backup box on a daily basis, 10Gb doesn't seem really useful for home users yet.
You have clearly never tried editing a photo in Lightroom (or even just going through a catalog of photos) off network storage, then. GbE is woefully insufficient for anything like that - let alone anything video-related. I only have SSDs in my main computer, with all mass storage in a NAS. Really don't want any 3.5" drives in my main PC (and don't even mention noisy 2.5" drives). Lately, I've had to resort to filling up my system drive with freshly imported photos to be able to process them in a timely manner, as I had to give up on editing off the NAS. The performance is simply not sufficient. The workaround is passable, for now, but definitely not a long-term solution.
#90
hat
Enthusiast
You are right, I never imagined such a workload. Though if I were working with large data sets all the time, the first thing I'd consider would be a sufficiently large HDD, or even 2 in RAID 0, if it's throwaway data generated temporarily during the working process and the finished product is stored elsewhere. Relying on a really fast intranet link for such data still seems a bit odd to me, but to each their own... :toast:
#91
Tsukiyomi91
In short, to those who are complaining that Intel & AMD aren't "giving enough PCIe lanes" on mainstream platforms: how about splurging on their HEDT processors & E-ATX boards, if you think a mere mainstream CPU + chipset's combined PCIe lanes are "not enough" for your overly-expensive, non-existent setup? Oh, I forgot... the folks who complain too much here don't even have the money to get an AMD EPYC or high-end Threadripper for their "gaming" or "content-creating" PCs...
#92
Vya Domus
GoldenX said: Where is my 1-core/2-thread, virtualization- and AVX-locked "i1"?
And on 10nm with no iGPU because of yields.
#93
Tsukiyomi91
Anyway, all that aside, seeing a mainstream 8-core/16-thread chip from Intel is something. The boost clocks across all 8 cores are interesting too.
#94
Gungar
lexluthermiester said: And yet you do. But their choice of CPU isn't supposed to have anything to do with whether or not someone is worthy of conversing with. So in reality, your statement says much more about you than it does about them. Ponder that.

And just FYI, more cache does not always equal faster operation.
It doesn't say shit about me, because you didn't even understand what I wrote. The FX series isn't a poor man's CPU, it's just a bad CPU except for very specific applications. And when someone comes and says "the CPU is redundant" when he clearly doesn't understand shit, I get angry.

And yes, more cache does not always equal faster operation; neither does having more than a dual core. Do you still buy a dual-core over an 8-core because of that?
#95
hat
Enthusiast
Tsukiyomi91 said: In short, to those who are complaining that Intel & AMD aren't "giving enough PCIe lanes" on mainstream platforms: how about splurging on their HEDT processors & E-ATX boards, if you think a mere mainstream CPU + chipset's combined PCIe lanes are "not enough" for your overly-expensive, non-existent setup? Oh, I forgot... the folks who complain too much here don't even have the money to get an AMD EPYC or high-end Threadripper for their "gaming" or "content-creating" PCs...
Well, saying they don't have the cash to afford it is a little unfair, I think. I think the PCIe lane situation is just one more cash-grab tactic from Intel. They've been doing other such things to increase their profits in AMD's absence as a viable competitor, such as using thermal paste rather than solder, and locking down overclocking on all but K-series CPUs. I think the PCIe lanes are just another part of it. While most people can get away with a Celeron and an entry-level H310 board as far as PCIe lanes go, the fact that AMD offers way more PCIe lanes across their whole platform for less money, while Intel forces you to spend lots of money if you really do need all those lanes, just points to another cash grab to me. If AMD can keep it up, I wouldn't be surprised to see overclocking available on all products again (aside from OEM boards, Dell and the like), soldered chips, more PCIe lanes, and whatever other goodness I suspect Intel has been deliberately withholding to pad profits while AMD was taking a nap.

I kinda veered off a little from what I was trying to get at. In short, my point is that just because lots of PCIe lanes aren't really necessary for most people doesn't mean they shouldn't be available. AMD has proven it can be done without the huge increase in cost we have seen from Intel. Intel was getting too comfortable completely dominating the market, and kept coming up with more ways to increase profits by deliberately cutting out this and that and only making it available on more expensive products.
#96
Valantar
Tsukiyomi91 said: In short, to those who are complaining that Intel & AMD aren't "giving enough PCIe lanes" on mainstream platforms: how about splurging on their HEDT processors & E-ATX boards, if you think a mere mainstream CPU + chipset's combined PCIe lanes are "not enough" for your overly-expensive, non-existent setup? Oh, I forgot... the folks who complain too much here don't even have the money to get an AMD EPYC or high-end Threadripper for their "gaming" or "content-creating" PCs...
This is exactly why we went with TR when I built my GF's video editing rig a while back. It's getting a 10GbE NIC as soon as switches at "reasonable" price points are available (I'd like <=$150 with >2 10GbE ports, and I'd need at least two ~$100 NICs as well, which makes the whole package quite pricey). It'll also probably get a couple more NVMe SSDs over the years. A rig like that needs fast storage and connectivity, and lots of PCIe, and even though both the motherboard and CPU could have been half the price if we went for Ryzen (though with 4 fewer cores), it would have necessitated compromises and a significant upgrade in too short a time.

Still, I would love the option of a >GbE connection on my gaming/photo editing rig too. Hopefully an AM4 ITX board shows up in the next couple of years with 10 (or at least 5) GbE built in. For now, I'll have to make do with some sort of local SSD caching scheme for photos I'm working on (with everything else on the NAS), which isn't exactly cheap either. I'm currently working through the ~70GB of photos I came back with from travelling this summer (only ~1700 24MP photos, but uncompressed RAW files are not small), and it's really putting the hurt on my system SSD. An extra NVMe slot or 10GbE would be very nice to have - and I have zero need for an HEDT CPU (that would in fact be quite detrimental to what I use the PC for).

Also: mainstream PCs have had ~16 PCIe lanes for quite a few years; AMD added 4. Intel has a bunch through their chipset, which is both good (plenty of potential devices connected) and bad (bottlenecked if using, for example, an SSD and a 10GbE NIC at the same time). On the other hand, NVMe storage has massively increased the potential use for PCIe in relatively mainstream PCs (especially for those who can't afford large SSDs and would rather add more drives as storage needs grow). We're getting to the point where a lot of people might be considering their second NVMe drive - especially now that prices are coming down. Is asking for a bit more really too much, even for relatively inexpensive mainstream components? Intel increasing the DMI link between CPU and chipset from x4 to x8 bandwidth shouldn't be that challenging or expensive, no? Or just adding a teeny-tiny microcode update to allow for finer-grained lane bifurcation? I'd gladly go for splitting the CPU x16 slot into x8+x4+x4 (GPU, NIC, SSD) or similar (maybe x8 GPU + x8 dual-SSD riser card?).
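To illustrate the chipset-uplink bottleneck mentioned above with rough numbers (PCIe 3.0 moves ~985 MB/s per lane after encoding overhead; the device throughput figures are illustrative assumptions, not measurements):

```python
# DMI 3.0 is electrically a PCIe 3.0 x4 link: ~985 MB/s per lane.
PCIE3_MBPS_PER_LANE = 985
dmi_uplink = 4 * PCIE3_MBPS_PER_LANE          # ~3940 MB/s, shared

# Illustrative chipset-attached devices running flat out at once:
nvme_ssd  = 3500   # MB/s, a fast PCIe 3.0 x4 NVMe drive
nic_10gbe = 1250   # MB/s, 10GbE at line rate

demand = nvme_ssd + nic_10gbe
print(f"Demand {demand} MB/s vs uplink {dmi_uplink} MB/s "
      f"-> oversubscribed by {demand - dmi_uplink} MB/s")
# An x8 uplink (~7880 MB/s) would clear this combination comfortably.
```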
hat said: You are right, I never imagined such a workload. Though if I were working with large data sets all the time, the first thing I'd consider would be a sufficiently large HDD, or even 2 in RAID 0, if it's throwaway data generated temporarily during the working process and the finished product is stored elsewhere. Relying on a really fast intranet link for such data still seems a bit odd to me, but to each their own... :toast:
Photos are pretty much the definition of not throwaway data ;) Even though Lightroom stores edits separately from base files in a catalog file, I'd argue that the amount of work put into such a catalog makes that very much not throwaway too, even if the photos are safely stored elsewhere.

As for working off an HDD or HDD RAID, response times (and transfer speeds for non-RAID) make that barely better than working over a GbE link. It's even worse if the data coming over Ethernet is off an SSD, as response times will then be better over the network even if transfer speeds are roughly the same. As I said above, I'll probably end up getting a bigger SSD for local "caching" of stuff I'm working on, but I'll have to look into ways of at least somewhat automating the caching, as transferring back and forth over the network manually is a chore. Not to mention avoiding duplicate versions of files, making sure edits are synced across devices, and all that jazz.

Personally, I like to specialize PC builds somewhat based on their use. I don't see any reason for my desktop PC, which really doesn't need multiple TB of storage given that it's mostly used for office work and gaming (with frequent if not constant photo work - it's a hobby, after all), to have large, noisy and hot HDDs. It's built to balance size, noise and power, and HDDs don't fit well into that balance. Conversely, we need enough backup storage (given that we're both hobby photographers and my GF does film production) that a NAS is a must anyway, which would make storing everything locally quite redundant. The NAS doubles as an HTPC for now, although I'm planning on separating that out so that the NAS doesn't have to live next to the TV. The NAS is also configured for power efficiency and 24/7 operation, and set up for remote access when we're travelling - all of which would be quite silly for my ~90W-at-idle water-cooled desktop (not to mention the Threadripper workstation :eek:). A pump failure or coolant leak when I'm not coming home for two weeks is not something I want to experience.

This type of thinking (and these hobbies/this work, I suppose) is how you end up with 5-6 computers in a 2-person household, btw :D
#97
hat
Enthusiast
I don't know much about the type of work you do... but I assumed there would be some sort of temporary working data created while working with the photos in the editing program. I know MeGUI (at least sometimes, depending on the settings) does this for transcoding video: it creates temporary working data while transcoding is in progress, and that data is no longer necessary and is deleted once the job is done. I'm sure you know what your process requires far better than I do, though... the only thing I can really disagree with you on is "noisy and hot" HDDs. I haven't had an HDD that was noisy or hot since way back when I had a socket 478 rig, listening to the HDD grind as I was loading a BF1942 map... they just don't make noise like that anymore. The only time I hear my drive, an aging 500GB WD Black, is when Windows has put it to "sleep" and I then access it. I can hear it spin up, but beyond that I hear nothing from it.
#98
Valantar
hat said: I don't know much about the type of work you do... but I assumed there would be some sort of temporary working data created while working with the photos in the editing program. I know MeGUI (at least sometimes, depending on the settings) does this for transcoding video: it creates temporary working data while transcoding is in progress, and that data is no longer necessary and is deleted once the job is done. I'm sure you know what your process requires far better than I do, though... the only thing I can really disagree with you on is "noisy and hot" HDDs. I haven't had an HDD that was noisy or hot since way back when I had a socket 478 rig, listening to the HDD grind as I was loading a BF1942 map... they just don't make noise like that anymore. The only time I hear my drive, an aging 500GB WD Black, is when Windows has put it to "sleep" and I then access it. I can hear it spin up, but beyond that I hear nothing from it.
The library (Lightroom's catalog of file names/locations and the edits/adjustments done to them) has to be stored locally, but it doesn't help in terms of image preview loading speeds. The main problem isn't in editing (which takes a bit of time per image, so loading times are less of an issue), but in the initial library management phase: after importing the photos, when you look through them to rate and select your favourites. This requires frequent loading of new pictures, and zooming in to check focus, sharpness and so on. When each RAW file is ~40-50MB and needs a decent amount of processing power to decode, that's quite demanding, and slow transfer speeds exacerbate the lag significantly. Lightroom doesn't generate full-size previews of photos until you zoom in (before that it relies on pre-generated previews, and doesn't do any type of full-size pre-fetch at all, at least that I can tell), which makes this very crucial (yet normally very quick) action laggy and annoying. If you're going through 300-500 (or more) photos, that second or two to load the full-resolution zoom adds up to a pretty bad experience. An option for better performance is generating what LR calls "Smart Previews" (I don't quite understand what these do, but I take them as a sort of proxy file, allowing for editing even when remote storage is offline), but using these also affects preview quality negatively and requires a very significant amount of processing time upon import.
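For a sense of scale on those load times, here's the raw transfer arithmetic for a single file (the ~45 MB size is from the post above; link speeds are theoretical line rates, so real-world times will be somewhat worse, and decode time comes on top):

```python
# Time just to move one ~45 MB RAW file, ignoring protocol overhead,
# latency, and the CPU time needed to decode it afterwards.
file_mb = 45
link_mb_per_s = {       # usable MB/s, theoretical
    "1 GbE":     125,
    "10 GbE":   1250,
    "SATA SSD":  550,
    "NVMe SSD": 3000,
}
for name, speed in link_mb_per_s.items():
    print(f"{name:>9}: {file_mb / speed * 1000:6.0f} ms")
# 1 GbE: ~360 ms per photo before decoding -- noticeable lag when
# zooming through hundreds of files; 10 GbE cuts that to ~36 ms.
```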

As for noisy and hot drives, I know I'm picky, but I also have a bad track record with HDDs in my main systems. Luckily my NAS is both relatively quiet and stable, but I've had far too many failed HDDs and clicky, whiny ones (not to mention ones that cause vibration noises no matter the amount of rubber mounting hardware used) to keep them around in my main rig. Since my PC is around 50cm from my head at all times (off the floor to keep dust out, on a shelving unit next to my desk), silence is crucial. Currently, when not loading the GPU I can barely tell that it's on at all. That's how I want it. Any HDD would be audible above this.
#99
Tsukiyomi91
@Valantar well, seeing native 10GbE & more native M.2 slots on boards, to really take advantage of the CPU & chipset PCIe lanes, would be something cool to look out for. Hopefully there will be something to cater to our needs later down the road.
#100
Valantar
Tsukiyomi91 said: @Valantar well, seeing native 10GbE & more native M.2 slots on boards, to really take advantage of the CPU & chipset PCIe lanes, would be something cool to look out for. Hopefully there will be something to cater to our needs later down the road.
Yeah, that would be sweet. I'm not really very hopeful for 10GbE on ITX, though, due to the "niche within a niche" nature of it and the board space needed for the controller (and cooling it). Still, hopefully it'll show up at some point. Maybe someone can manage to stick the controller and a thin heatsink on the rear of the motherboard? More m.2 is possible, though, as Asus has shown with their newest ITX boards. Vertical m.2 at the edges of the board would also be a decent solution.