#1
May 21, 2009, 09:09 PM
Top Prospect
 
Join Date: Mar 2007
Location: West Coast Canada
Posts: 175
What gets you more FPS, a higher GPU core clock or memory clock?

I have been trying different overclocks today, and so far, when I unlink my shaders and keep them at 1512MHz, I can get my core clock up to 730MHz and still climbing. But what is going to give me better gaming performance: a higher core clock with lower shaders, or linked shaders and a higher memory clock?
So far I've gained an extra 30MHz on my core just from leaving my shaders at 1512MHz, and I think I still have room to go higher, but will I actually see more FPS in games or not? I will run some basic 3DMark06 benchies at four different GPU speeds to find out (when I have some time!):
1. Default speeds
2. Max core and shaders linked, with stock memory
3. Max memory, with stock core and shaders linked
4. Max core with shaders unlinked, and stock memory
Can someone please help me with this query? Please try to base your answers on facts, and include links to support your claims if possible.
Thanks
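
[Editor's note: a minimal Python sketch for tabulating the four runs above once the scores are in. The config labels echo the list; the scores are placeholders (None) until the benchmarks are actually run.]

Code:
# Minimal helper for comparing the four 3DMark06 runs listed above.
# Scores are placeholders (None) until the actual benchmarks are run.

configs = {
    "1. default speeds": None,
    "2. max core + shaders linked, stock mem": None,
    "3. max mem, stock core + shaders linked": None,
    "4. max core, shaders unlinked, stock mem": None,
}

def report(configs):
    base = configs["1. default speeds"]
    for name, score in configs.items():
        if base is None or score is None:
            print(f"{name}: (no score yet)")
        else:
            print(f"{name}: {score} ({100.0 * (score - base) / base:+.1f}% vs default)")

report(configs)
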
__________________
System spec: Intel i7 2600K, Gigabyte GTX 570 Super Overclock, Kingston DDR3 2x4GB, ASUS P8P67 PRO, Cooler Master 700W PSU, 2x 750GB SATA + 1TB SATA III HDD, Acer 24" widescreen, Antec 300, Windows 7 Ultimate 64-bit
#2
May 21, 2009, 10:06 PM
MpG
Hall Of Fame
 
Join Date: Aug 2007
Location: Kitchener, ON
Posts: 3,144

Unfortunately, there is no clear answer; it will vary from game to game and from setting to setting. The game and its settings determine how much of the video card's memory is actually needed to hold the necessary data, and the memory speed determines how fast that data can be retrieved and sent to the core. The core/shaders then take that data and execute calculations on it.

In one extreme, the core will need to do very complex calculations, and the memory bus will have no trouble keeping the core fed with data to work with. In the other extreme, the calculations will be very simple, and the core will finish long before the memory bus can manage to get the next set of data sent. In the former case, speeding up the core and shaders will net the most benefit, and increasing the memory speed will accomplish very little. In the latter case, vice versa. Most real situations fall somewhere in-between.
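
[Editor's note: the two extremes can be made concrete with a toy model, where per-frame time is set by whichever of the core or the memory bus finishes last. Every number below is made up for illustration; with these work constants the toy card is compute-bound, so only the core bump moves the frame time. Swap the constants and it flips.]

Code:
# Toy model of the two extremes: the frame takes as long as whichever
# of the core (compute) or memory bus finishes last. Numbers are made up.

def frame_time_ms(core_mhz, mem_mhz, compute_work=10_000, mem_work=8_000):
    compute_ms = compute_work / core_mhz   # time the core needs
    memory_ms = mem_work / mem_mhz         # time the bus needs to feed it
    return max(compute_ms, memory_ms)      # the slower one sets the pace

base = frame_time_ms(600, 1100)
print(f"baseline : {base:.2f} ms/frame")
print(f"core +10%: {frame_time_ms(660, 1100):.2f} ms/frame")  # improves
print(f"mem  +10%: {frame_time_ms(600, 1210):.2f} ms/frame")  # no change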

The specifics will vary from card to card, depending on the specs and how they balance each other. I know my own GTX 280 quit showing much 3DMark06 improvement after about 1300MHz on the memory, but it's also got a 512-bit memory bus. A GTX 275, with a similar core but a narrower bus, might have kept benefiting from higher speeds. The HD 4870s were often noted to stop showing improvements after the memory hit about 2000MHz. And so on. Nothing but case-by-case testing is going to give a definite answer, I'm afraid.
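
[Editor's note: the bus-width point can be put in rough numbers. Peak memory bandwidth is approximately bus width in bytes times the effective transfer rate; this sketch assumes GDDR3's double data rate and compares the 512-bit GTX 280 against the GTX 275's narrower 448-bit bus at the same 1300MHz mentioned above.]

Code:
# Rough peak bandwidth: bus width in bytes * effective transfer rate.
# GDDR3 is double data rate, hence the default multiplier of 2; other
# memory types differ, so check what your monitoring tool reports.

def bandwidth_gbps(bus_bits, mem_clock_mhz, rate_multiplier=2):
    return (bus_bits / 8) * (mem_clock_mhz * rate_multiplier) / 1000

print(f"512-bit GTX 280 @ 1300MHz: {bandwidth_gbps(512, 1300):.1f} GB/s")
print(f"448-bit GTX 275 @ 1300MHz: {bandwidth_gbps(448, 1300):.1f} GB/s")
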
__________________
i7 2600K | ASUS Maximus IV GENE-Z | 580GTX | Corsair DDR3-2133
#3
May 21, 2009, 10:20 PM
rjbarker
Hall Of Fame
 
Join Date: Feb 2008
Location: Courtenay, B.C
Posts: 5,932


^^^^ Nice explanation. Personally, I tend to unlink and bump up all three (core/shaders/memory) as I go along; I wouldn't be big on maxing one out and leaving another behind.
I test as I go... nothing outrageous, I simply play my games and monitor temps / watch for artifacts, etc.
__________________
Introducing me n my OCD to Watercooling, is like taking an Alcoholic to an "all you can drink" Beach Bar in Mexico

#4
May 21, 2009, 10:29 PM
Realityshift
Hall Of Fame
F@H
 
Join Date: Feb 2009
Location: Fort McMurray, AB
Posts: 2,486

Do you ever find that unlinking the shaders from the core gains you any extra performance, Rj? Every time I've tested it I've seen no change =( so now I just leave the two linked.
__________________
The Builders Hammer
Asrock P67 Fatal1ty, Intel i5 2500K @ 5.1GHz, 16GB HyperX 1600 CL8, EVGA GTX 580 SC SLI'd and water-cooled, HX850, Crucial M4 64GB, 640GB Black, 1TB + 3x 2TB Greens, HAF X.


BF BC2: Bishop_SHIFT, Steam: Realityshift84, Xbox Live - SHIFTEDone84 ... Come play.
#5
May 21, 2009, 10:35 PM
rjbarker
Hall Of Fame
 
Join Date: Feb 2008
Location: Courtenay, B.C
Posts: 5,932


Quote:
Do you ever find that unlinking the shaders from the core gains you any extra performance, Rj?
Not sure, really. I've always unlinked and moved them individually... I can't remember now, but I know I reached a point where if I moved my shaders much higher than 1458MHz... boom... artifacts... but I wanted my memory bumped higher. So I started playing around, leaving my shaders at 1458MHz and separately moving my memory higher, along with aggressive bumps in core, until I found my "happy place": max allowable temps and no artifacts when gaming (it would likely bugger up in FurMark, but I don't "play" FurMark)!
__________________
Introducing me n my OCD to Watercooling, is like taking an Alcoholic to an "all you can drink" Beach Bar in Mexico

#6
May 22, 2009, 12:30 AM
Realityshift
Hall Of Fame
F@H
 
Join Date: Feb 2009
Location: Fort McMurray, AB
Posts: 2,486

I normally just keep them linked: bump up the core till it artifacts, back it down a notch, and then do the same for the memory. I tend to get decent-performing overclocks that way, and I never seem to run into heat issues. Although now that I have a GTX 2xx, I might try messing around with the voltages ;) Nice of EVGA to let us do that via software.
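
[Editor's note: the bump/back-off routine is basically a simple search loop. A sketch of it below, with a hypothetical is_stable() standing in for you actually eyeballing a game or benchmark run after each bump. The start clock and step size are arbitrary examples.]

Code:
# "Bump until it artifacts, then back off a notch" as a loop.
# is_stable() is hypothetical: in reality it's a game or benchmark run
# that you watch for artifacts after each bump.

def find_max(start_mhz, step_mhz, is_stable):
    clock = start_mhz
    while is_stable(clock + step_mhz):
        clock += step_mhz
    return clock  # last setting that passed, one notch below failure

# Stand-in check for illustration: pretend artifacts appear past 730MHz
# (the core number from post #1).
print(find_max(602, 13, lambda mhz: mhz <= 730))  # -> 719
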
__________________
The Builders Hammer
Asrock P67 Fatal1ty, Intel i5 2500K @ 5.1GHz, 16GB HyperX 1600 CL8, EVGA GTX 580 SC SLI'd and water-cooled, HX850, Crucial M4 64GB, 640GB Black, 1TB + 3x 2TB Greens, HAF X.


BF BC2: Bishop_SHIFT, Steam: Realityshift84, Xbox Live - SHIFTEDone84 ... Come play.
#7
May 22, 2009, 12:59 AM
MpG
Hall Of Fame
 
Join Date: Aug 2007
Location: Kitchener, ON
Posts: 3,144

There are a number of components that work off the 'core' clock, and it only takes one of them to limit an overclock. So it's possible that unlinking the core and shaders might let you push the shaders a little further, or vice versa. With my own card, the shaders crapped out before the rest of the core did when linked, so unlinking let me push the rest of the core a little extra. Not a huge benefit, since games are getting pretty shader-dependent these days, but every bit helps.
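
[Editor's note: put another way, linked clocks are capped by whichever domain gives out first. A toy illustration, using the numbers from post #1 and a rough GT200-style core:shader ratio; all three values are assumptions.]

Code:
# Linked clocks are capped by whichever domain hits its limit first.
# All three numbers below are assumptions for illustration.

CORE_MAX = 730      # MHz the core alone could reach (from post #1)
SHADER_MAX = 1512   # MHz the shaders reach before artifacting (post #1)
LINK_RATIO = 2.16   # rough core:shader ratio on GT200-era cards

linked_cap = min(CORE_MAX, SHADER_MAX / LINK_RATIO)
print(f"linked core cap:   {linked_cap:.0f}MHz (shaders give out first)")
print(f"unlinked core cap: {CORE_MAX}MHz")
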
__________________
i7 2600K | ASUS Maximus IV GENE-Z | 580GTX | Corsair DDR3-2133
#8
May 22, 2009, 06:33 AM
CanadaRox
Allstar
F@H
 
Join Date: Feb 2008
Location: Scarborough (Toronto)
Posts: 614

"Way" back when I had my 8800GTX, I found I got the best overclock by having them all unlinked and then overclocking each one to its max starting with the core, then shader, then memory. BY unlinking the shader I actually got close to 150MHz extra on it over what it would have been had I left them linked.

And for the original question... I generally find that a higher core clock is most important, followed by shader and then memory. There are situations where this isn't true, so if you are going to overclock, you might as well do them all.
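
[Editor's note: one way to check that ordering on your own card and game is to raise each clock alone by the same percentage and compare the FPS gains. The measure_fps() below is hypothetical (substitute a real benchmark run at the given clocks), and the stock clocks and fake weights are made-up examples.]

Code:
# Rank which clock your game is most sensitive to: bump each one alone
# by the same percentage and compare FPS. measure_fps() is hypothetical;
# substitute a real benchmark run at the given clocks.

STOCK = {"core": 602, "shader": 1296, "mem": 1107}  # example GT200 clocks

def rank_clocks(measure_fps, bump=0.05):
    base = measure_fps(**STOCK)
    gains = {}
    for key in STOCK:
        trial = dict(STOCK, **{key: STOCK[key] * (1 + bump)})
        gains[key] = measure_fps(**trial) / base - 1
    return sorted(gains.items(), key=lambda kv: kv[1], reverse=True)

# Fake, made-up measurement just so the sketch runs end to end:
fake_fps = lambda core, shader, mem: 0.05 * core + 0.02 * shader + 0.01 * mem
print(rank_clocks(fake_fps))  # core ranks first with these fake weights
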
__________________
Project: Black and White
i7 920 D0 | 3 x 2GB DDR3 | EVGA X58 SLI LE
XFX 4890 | Corsair HX750 | Corsair Obsidian