#31
August 11, 2017, 09:56 AM
Groove
MVP
Join Date: Jan 2009
Location: Ottawa(ish)
Posts: 432

Quote:
Originally Posted by EmptyMellon
Plus, once the hype subsides, so will the prices. And I would not be surprised to see AMD bring down the TR prices as Intel releases the rest of their Skylake-X CPUs.
Yes, especially given the fact that Ryzen 7 has seen some pretty good sales in the last few weeks... I wouldn't be surprised to see similar sales for TR as well...
#32
August 11, 2017, 10:36 AM
ipaine
Hall Of Fame
F@H
Join Date: Apr 2008
Location: Edmonton, AB
Posts: 2,799

Quote:
Originally Posted by EmptyMellon
After reading this review, I am looking forward to the 31st of August for the 1900X review. After that it will be a good time to decide between the X299 (i7-7820X?) or X399 (TR4 1900X?) platforms, or even AM4 (1800X?) if it makes sense. Now that the Threadripper reviews are out on the 'inter-webs', could the board makers please stop making/marketing any more "Gaming" TR motherboards? It is that much sillier since the reviews point to prosumer/content creator/home lab type workloads and less to gaming.
This^^

I understand that if I had just a single card the 7700K is most likely the best CPU for gaming right now, but what I want to know is: what about dual cards? I guess I want to see what kind of difference we are going to see with the 1900X. Something like a 7700K with dual 1080 Tis and an NVMe drive vs a 1900X with dual 1080 Tis and an NVMe drive.

While I am not certain that I will be building a new system this year, I am certainly considering it. Really, if it wasn't for the PCIe lanes I would almost definitely be sticking with Intel, but I want to be able to run dual cards and have at least one NVMe drive that isn't getting neutered.
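
Rough lane math, as a back-of-the-envelope sketch (assuming the commonly quoted figures of 16 usable CPU lanes on the 7700K and roughly 60 device-usable lanes on Threadripper; real board layouts vary):

Code:
# Back-of-the-envelope PCIe lane budget (not a benchmark). Lane counts are the
# commonly quoted figures; actual motherboard topologies vary.

def fits(platform, usable_lanes, devices):
    wanted = sum(lanes for _, lanes in devices)
    if wanted <= usable_lanes:
        verdict = "fits at full width"
    else:
        verdict = "must split/share (e.g. x8/x8 GPUs, NVMe via the chipset link)"
    print(f"{platform}: devices want {wanted} lanes, CPU provides {usable_lanes} -> {verdict}")

devices = [("GPU 1", 16), ("GPU 2", 16), ("NVMe SSD", 4)]

fits("i7-7700K (Z270)", 16, devices)   # 36 > 16
fits("TR 1900X (X399)", 60, devices)   # 36 <= 60, lanes to spare
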
__________________
"Nothing sucks more than that moment during an argument when you realize you're wrong."

https://www.evga.com/associates/defa...CUV5OY6FY0U02O
#33
August 12, 2017, 07:34 AM
AkG
Hardware Canucks Reviewer
Join Date: Oct 2007
Posts: 5,268

Personally, I think single-thread performance is an over-rated metric (not 'can it play Crysis' over-rated... but over-rated). NO modern OS uses only one thread... and if you are buying a workstation (AKA HEDT in Intel parlance)... it doesn't matter, as you are doing multiple things at once, so the OS can hog a couple of threads (or even a dozen) and you won't notice its overhead. The same can't be said of 6 or even 8 core CPUs.

That is the beauty of these next-gen CPUs: you don't have to futz with the OS as much to reduce its pig-slow overhead and get back 'lost' performance on WORK tasks.

Personally, I can't wait to see the TR 1950X vs the Core i9-7960X. A 3.4 GHz base clock vs a 2.8 GHz base clock should be VERY interesting come October-ish, as that is the speed the cores will be running at when used properly. Until then, I can see a lot of professionals not waiting for Intel.
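
A rough back-of-the-envelope (assuming both chips just hold their base clocks under a sustained all-core load; this ignores all-core turbo, IPC, and memory differences):

Code:
# Crude all-core throughput comparison at base clocks only; ignores turbo,
# IPC and memory behaviour, so treat it as illustrative.
cores = 16                    # both the 1950X and i9-7960X are 16-core parts
tr_base, i9_base = 3.4, 2.8   # GHz base clocks

tr_total = cores * tr_base    # 54.4 GHz-cores
i9_total = cores * i9_base    # 44.8 GHz-cores
print(f"1950X: {tr_total:.1f} GHz-cores vs 7960X: {i9_total:.1f} GHz-cores "
      f"(~{(tr_total / i9_total - 1) * 100:.0f}% more raw clock)")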

IMHO, right now AMD's building-block design may not be perfect (namely the Infinity Fabric bottleneck), but it looks like AMD is ahead of the curve and can scale up faster and more easily than Intel can. Only time will tell if this equates to better models. Probably better value, if nothing else.

YMMV
__________________
"If you ever start taking things too seriously, just remember that we are talking monkeys on an organic spaceship flying through the universe." -JR

"If your opponent has a conscience, then follow Gandhi. But if your enemy has no conscience, like Hitler, then follow Bonhoeffer." - Dr. MLK Jr.
#34
August 12, 2017, 08:12 AM
MARSTG
Hall Of Fame
F@H
Join Date: Apr 2011
Location: Montreal
Posts: 4,206

Quote:
Originally Posted by ipaine
I guess I want to see what kind of difference we are going to see with the 1900X. Something like a 7700K with dual 1080 Tis and an NVMe drive vs a 1900X with dual 1080 Tis and an NVMe drive.
Aren't multi-GPU setups directly dependent on vendor and developer involvement? Meaning drivers, profiles, and game engine scalability? Multi-GPU setups are a thing of the past by now; they are very rare, and today's GPUs are powerful enough for most gaming needs.
#35
August 12, 2017, 11:10 AM
AkG
Hardware Canucks Reviewer
Join Date: Oct 2007
Posts: 5,268

They are not a thing of the past if you are doing 4K. No single card is really 'enough' with the eye candy turned on. ;)
__________________
"If you ever start taking things too seriously, just remember that we are talking monkeys on an organic spaceship flying through the universe." -JR

"If your opponent has a conscience, then follow Gandhi. But if your enemy has no conscience, like Hitler, then follow Bonhoeffer." - Dr. MLK Jr.
#36
August 12, 2017, 03:21 PM
great_big_abyss
Hall Of Fame
F@H
Join Date: Oct 2011
Location: Winnipeg
Posts: 2,272

Quote:
Originally Posted by AkG
Personally, I think single-thread performance is an over-rated metric (not 'can it play Crysis' over-rated... but over-rated). NO modern OS uses only one thread... and if you are buying a workstation (AKA HEDT in Intel parlance)... it doesn't matter, as you are doing multiple things at once, so the OS can hog a couple of threads (or even a dozen) and you won't notice its overhead. The same can't be said of 6 or even 8 core CPUs.

That is the beauty of these next-gen CPUs: you don't have to futz with the OS as much to reduce its pig-slow overhead and get back 'lost' performance on WORK tasks.

Personally, I can't wait to see the TR 1950X vs the Core i9-7960X. A 3.4 GHz base clock vs a 2.8 GHz base clock should be VERY interesting come October-ish, as that is the speed the cores will be running at when used properly. Until then, I can see a lot of professionals not waiting for Intel.

IMHO, right now AMD's building-block design may not be perfect (namely the Infinity Fabric bottleneck), but it looks like AMD is ahead of the curve and can scale up faster and more easily than Intel can. Only time will tell if this equates to better models. Probably better value, if nothing else.

YMMV
In my job, I use a program to create a 3D model of a building and input all the floor and roof members, with all the loading from the top down. The program then analyzes each member and its effect on neighbouring members. For most of my co-workers (who do single-family houses) performance isn't an issue. However, I'm the commercial guy, so all the buildings I do are huge multi-family, 200-unit buildings that are 3-4 storeys plus a roof. When I run the analysis, the process takes 10+ minutes, as opposed to almost instantly for everybody else.

Now, the problem is that the program we are using is highly single-threaded. I noticed that one thread on my work computer's CPU was pegged while doing the analysis, with the other 7 doing almost nothing (basic Windows tasks). The CPUs we have are i7-4790s at 4.0 GHz. Unfortunately, it would cost a HUGE amount of money to get a very small return in single-threaded performance (extreme edition), so IT isn't going to bother with an upgrade.
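
For what it's worth, one quick way to confirm it really is a single-thread bottleneck is to sample per-core load while the analysis runs. A generic sketch using the third-party psutil package (nothing specific to our CAD program):

Code:
# Sample per-core CPU utilisation for ~10 seconds while the analysis runs.
# Needs the third-party psutil package (pip install psutil).
import psutil

for _ in range(10):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(" ".join(f"{p:5.1f}%" for p in per_core))
# One core pinned near 100% while the rest sit idle is the classic
# signature of a single-threaded workload.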

I am one person who cares about single threaded performance.
__________________



HTPC: Z77A-G45; 3770K; Zalman FX70; 2x4GB Kingston HyperX 1600MHz; MSI GTX960; 2x 128GB Crucial M4 SSD; 4TB WD Red, 2x 2TB WD Green; Corsair RM650I; Corsair Carbide 600C;
Son's Rig: M5A97; FX8350; CNPS20LQ; 2x4GB Corsair Vengeance 1600MHz; Powercolor 7950; 250GB Crucial MX200; 320GB WD Black HDD; SPI 700W; Bitfenix Shinobi;
#37
August 12, 2017, 04:35 PM
MARSTG
Hall Of Fame
F@H
Join Date: Apr 2011
Location: Montreal
Posts: 4,206

Quote:
Originally Posted by great_big_abyss
Now, the problem is that the program we are using is highly single-threaded.
So you have two options: change the software, or get IT to build you a workstation with an i3-7350K and a Z270 mobo.
#38
August 12, 2017, 06:20 PM
EmptyMellon
Allstar
Join Date: Nov 2010
Posts: 526

Quote:
Originally Posted by MARSTG
So you have two options: change the software, or get IT to build you a workstation with an i3-7350K and a Z270 mobo.
The problem with using an i3-7350K is the cache size: 4MB vs the i7-4790K's 8MB. From my personal experience and observation, using an under-powered CPU with "large" CAD files would be counterproductive; CPU frequency is not everything when using CADD. Hence, I am now looking to upgrade my (now anemic) i7-3820, with its measly 10MB of cache.

That being said, I don't know what CAD package g_b_a uses and how well/poorly it is optimized for the various CPU architecture features, but I would not ignore cache.

Due to the game-centric perspective, way too often people focus on CPU frequency while ignoring cache size.
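
To make the cache/working-set point a bit more concrete, here is a crude NumPy sketch (just an illustration, not a CAD benchmark): the same number of elements get summed both ways, but one working set stays cache-resident while the other has to stream from main memory. Absolute numbers will vary a lot from machine to machine.

Code:
# Crude illustration of cache-resident vs memory-bound access; not a CPU benchmark.
import time
import numpy as np

small = np.random.rand(64 * 1024)         # ~512 KB, comfortably inside L2/L3
large = np.random.rand(64 * 1024 * 1024)  # ~512 MB, far bigger than any cache
reps = large.size // small.size           # touch the same total number of elements

t0 = time.perf_counter()
for _ in range(reps):
    small.sum()                           # stays hot in cache after the first pass
t1 = time.perf_counter()
large.sum()                               # every element comes in from DRAM
t2 = time.perf_counter()

print(f"cache-resident: {t1 - t0:.3f} s   memory-bound: {t2 - t1:.3f} s")
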
__________________
"All animals are equal, but some animals are more equal than others." - G. Orwell, Animal Farm
#39
August 12, 2017, 06:51 PM
bignick277
Top Prospect
Join Date: May 2010
Posts: 169

Quote:
Originally Posted by SKYMTL
Our benchmarks show Ryzen Threadripper to be better than Broadwell-E in single thread actually:

http://www.hardwarecanucks.com/forum...-review-9.html

I think Anandtech may have used a different memory latency setup between their Broadwell-E and Ryzen setup.
They did. AnandTech only ran their tests at 2400MHz memory speeds. They stated this in the review and said the reason is that 2400MHz is the maximum "supported" memory speed; in other words, the maximum memory speed AMD "officially" supports. They said they will release more data later in another article (time permitting) showing the same benchmarks but with the memory running at 3200MHz. Even on Intel, anything over 2400MHz on memory is technically considered an overclock.

Here's the relevant page of that article where they specifically discuss this:

AMD’s Solution to Dual Dies: Creator Mode and Game Mode - The AMD Ryzen Threadripper 1950X and 1920X Review: CPUs on Steroids

"So for the benchmarks in the review, we will have three numbers: 1920X, 1950X and 1950X in Game mode, labeled as 1950X-G. The high-DRAM numbers will be supplied in separate graphs or a separate review, time permitting"

They do, however, show the memory latencies measured at 2400MHz and 3200MHz in both UMA and NUMA modes, but not in the actual benchmarks. It's interesting data, though.
#40
August 12, 2017, 08:20 PM
EmptyMellon
Allstar
Join Date: Nov 2010
Posts: 526

Quote:
Originally Posted by bignick277
They did. AnandTech only ran their tests at 2400MHz memory speeds. They stated this in the review and said the reason is that 2400MHz is the maximum "supported" memory speed; in other words, the maximum memory speed AMD "officially" supports. They said they will release more data later in another article (time permitting) showing the same benchmarks but with the memory running at 3200MHz. Even on Intel, anything over 2400MHz on memory is technically considered an overclock.

Here's the relevant page of that article where they specifically discuss this:

AMD’s Solution to Dual Dies: Creator Mode and Game Mode - The AMD Ryzen Threadripper 1950X and 1920X Review: CPUs on Steroids

"So for the benchmarks in the review, we will have three numbers: 1920X, 1950X and 1950X in Game mode, labeled as 1950X-G. The high-DRAM numbers will be supplied in separate graphs or a separate review, time permitting"

They do, however, show the memory latencies measured at 2400MHz and 3200MHz in both UMA and NUMA modes, but not in the actual benchmarks. It's interesting data, though.
Based on AMD's own site, the "Max Memory Support" is 2667MHz for the current TR4 platform, while AnandTech's article states: "All CPUs were run at DDR4-2400, which is the maximum supported at two DIMMs per channel." In their setup they had 32GB (4x8GB) of memory, so why they ran at 2400 is beyond me. It's not like they had all DIMMs populated.
__________________
"All animals are equal, but some animals are more equal than others." - G. Orwell, Animal Farm