GeForce 8800

New NVIDIA 163.69 drivers came out today.

Supports GeForce 6, 7, and 8 series GPUs
Improved compatibility and performance for NVIDIA SLI™ technology on DirectX 9, DirectX 10, and OpenGL applications.
Improved compatibility for The Way It’s Meant To Be Played™ game titles: BioShock, Crysis, Enemy Territory: Quake Wars, Hellgate: London, and World in Conflict.
Added HD video post-processing for H.264, VC-1 and MPEG-2 HD content on the GeForce 8500 and 8600 series GPUs.
Numerous game and application compatibility fixes


Link: http://www.nvidia.com/object/winvista_x86_163.69.html

They're out for Vista 32/64-bit and WinXP 32/64-bit.
Yep, they look good; the site says a WHQL version of these is on the way ("WHQL soon") :)
If anyone finds the release notes, post them here.
 
XFX selling Fatal1ty-branded GeForce 8800 GTS

Manufacturers can't get enough of the professional gamer nicknamed Fatal1ty (Ed.: is he still playing?), and XFX in particular likes putting his face on overclocked graphics cards. The newest Fatal1ty-branded card is one that certainly gets attention, as it is a 320MB GeForce 8800 GTS, the most popular (and cheapest) G80 offering. The Fatal1ty 8800 GTS is the highest-clocked version around and comes with a core clock of 650MHz (stock is 500MHz), shaders set to 1500MHz (1200MHz stock) and memory at 2GHz (1600MHz stock).

The card appears to keep the stock cooler, but XFX really went all the way on this one, as getting to 650MHz with the 90nm G80 is 'something special'. The Fatal1ty 8800 GTS is available in limited supply, but no price tag has been revealed.


[image: XFX Fatal1ty 8800 GTS]


http://www.tcmagazine.com/comments.php?shownews=16100&catid=2

Cheers
 
It's now possible to overclock the shader clock separately using RivaTuner 2.04 and the 163.67 drivers :D

I can see this subject bringing up a number of FAQs, so here's a rough guide to try to head some of them off and to stop Unwinder from banging his head against a wall :-)

NVIDIA G80 based GPU shader clock speed adjustment using 163.67 drivers and RivaTuner 2.04
======================================================================

Overview/Background
---------------------

Prior to NVIDIA driver release 163.67, the shader clock speed was linked to the core clock (aka ROP domain clock) speed and could not be changed independently. The relationship between core and shader domain clock speeds (for most cards) is shown in Table A. Some cards have slightly different set frequency vs. resultant core/shader speeds, so take the table as an illustration of how the shader clock changes with respect to the core clock rather than as precise values. To overclock the shader speed it was necessary to flash the GPU BIOS with a modified version that sets a higher default shader speed.

By way of an example, my EVGA 8800 GTS Superclocked comes from the factory with BIOS-programmed default core and shader speeds of 576 and 1350, respectively. When increasing the core speed, I found 648 to be my maximum stable speed. From Table A, you can see that with a core of 648 the maximum shader speed (owing to the driver-controlled core/shader speed linkage) is 1512. To push it higher you increase the BIOS-set shader speed. For example, with a BIOS set to core/shader 576/1404 (from 576/1350), all linked shader speeds are bumped up by 54MHz. So now when increasing the core to 648, the maximum shader speed becomes 1512+54=1566. I eventually determined my maximum stable shader speed to be 1674 (achieved with GPU BIOS startup speeds set to 576/1512; overclocking the core to 648 now yields a shader speed of (1512-1350)+1512=1674).
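
For clarity, here is a minimal Python sketch of the arithmetic above (the values are the ones from this worked example; the linked shader speed for a given core clock comes from Table A below):

Code:
# Sketch of the old (pre-163.67) BIOS-bump arithmetic, using the values
# from the worked example above.
STOCK_BIOS_SHADER = 1350        # factory BIOS default shader clock (MHz)

def max_shader_with_bios_bump(linked_shader, bios_shader):
    # linked_shader: shader speed the driver picks for the chosen core clock
    #                with the stock BIOS (1512 for a 648 MHz core, per Table A)
    # bios_shader:   shader default programmed into the modified BIOS
    bump = bios_shader - STOCK_BIOS_SHADER
    return linked_shader + bump

print(max_shader_with_bios_bump(1512, 1404))   # 1566
print(max_shader_with_bios_bump(1512, 1512))   # 1674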

However, as of NVIDIA driver release 163.67, the shader clock can now be modified independently of the core clock speed. Here is the announcement by Unwinder:

"Guys, I've got very good news for G80 owners. I've just examined overclocking interfaces of newly released 163.67 drivers and I was really pleased to see that NVIDIA finally added an ability of independent shader clock adjustment. As you probably know, with the past driver families the ForceWare automatically overclocked G80 shader domain synchronicallly with ROP domain using BIOS defined Shader/ROP clock ratio. Starting from 163.67 drivers internal ForceWare overclocking interfaces no longer scale shader domain clock when ROP clock is adjusted and the driver now provides completely independent shader clock adjustment interface. It means that starting from ForceWare 163.67 all overclocking tools like RivaTuner, nTune, PowerStrip or ATITool will adjust ROP clock only.
However, new revisions of these tools supporting new overclocking interfaces will probably allow you to adjust shader clock too. Now I've played with new interfaces and upcoming v2.04 will contain an experimental feature allowing power users to definie custom Shader/ROP ratio via the registry, so RT will clock shader domain together with ROP domain using user defined ratio.
And v2.05 will give you a completely separate slider for adjusting the shader clock independently of the core clock.

Note:

By default this applies to Vista specific overclocking interfaces only, Windows XP drivers still provide traditional overclocking interface adjusting both shader and ROP clocks. However, XP drivers also contain optional Vista-styled overclocking interfaces and you can force RivaTuner to use them by setting NVAPIUsageBehavior registry entry to 1."

Two big points of note here:
*) The driver's new overclocking functionality is only used *by default* on Vista. Setting RivaTuner's NVAPIUsageBehavior registry entry to 1 will allow XP users to enjoy the new shader speed configurability.
*) With the new driver interface, by default, the shader speed will not change AT ALL when you change the core speed. This is where the use of RivaTuner's new ShaderClockRatio registry value comes in (see below). It can be found under the power user tab, RivaTuner->Nvidia->Overclocking.


Changing the shader clock speed
--------------------------------

On to the mechanics of the new ShaderClockRatio setting in RivaTuner 2.04. Here's more text from Unwinder:

"Guys, I’d like to share with you some more important G80 overclocking related specifics introduced in 163.67:

1) The driver’s clock programming routine is optimized and it causes unwanted effects when you’re trying to change the shader domain clock only. Currently the driver looks at the ROP domain clock only to see whether clock generator programming has to be performed or not. For example, if your 8800GTX ROP clock is set to 612MHz and you need to change the shader domain clock only (directly or via specifying a custom shader/ROP clock ratio) without changing the current ROP clock, the driver will optimize clock frequency programming, see that the ROP clock has not changed, and simply won’t change the clocks, even if the requested shader domain clock has been changed. The workaround is pretty simple: when you change the shader clock, always combine it with a ROP clock change (for example, if your 8800GTX ROP clock is set to 612MHz and you’ve changed the shader clock, simply reset the ROP clock to the default 576MHz, apply it, then return it to 612MHz again to get the new shader clock applied). I hope that this unwanted optimization will be removed in future ForceWare; for now, please just keep it in mind while playing with shader clock programming using RT 2.04 and 163.67.
2) Currently the Vista driver puts some limitations on the ROP/shader domain clock ratio you’re allowed to set. Most likely they are related to the hardware clock generator architecture, and the hardware simply cannot work (or cannot work stably) when the domain clocks are too asynchronous. For example, on my 8800GTX the driver simply refuses to set the clocks with a shader/ROP ratio within the 1.0 – 2.0 range (the default ratio is 1350/575 = 2.34), but it accepts the clocks programmed with a ratio within the 2.3 – 2.5 range. Considering that the driver no longer changes the domain clocks synchronously and all o/c tools (RT 2.03, ATITool, nTune, PowerStrip) currently change the ROP clock only, that results in a rather interesting effect: you won’t be able to adjust the ROP clock as high as before. Once it gets too far from (or too close to) the shader clock and the shader/ROP clock ratio is out of range, the driver refuses to set such a clock. Many of you have already noticed this effect, seeing that the driver simply stops increasing the ROP clock after a certain dead point with 163.67."

and

"In the latest build of 2.04 (2.04 test 7) I've added an ability of setting ShaderClockRatio to -1, which can be used to force RivaTuner to recalculate desired Shader/ROP ratio automatically by dividing default shader clock by default ROP clock.
So if you set ShaderClockRatio = -1 and change ROP clock with RT, it will increase shader clock using you card's BIOS defined ratio (e.g. 1350/576=2.34 on GTX, 1188/513 = 2.32 on GTS etc). If you wish to go further, you may still override the ratio, for example increase shader clock by specifying greater ratio (e.g. ShaderClockRatio = 2.5)."
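
To make the ShaderClockRatio = -1 behaviour concrete, here is a small illustrative Python sketch (the clock values are the BIOS defaults quoted above; the function is only an illustration of the rule, not RivaTuner's actual code):

Code:
# Illustration of how the ShaderClockRatio value is interpreted.
def effective_ratio(shader_clock_ratio, default_shader, default_rop):
    if shader_clock_ratio == -1:
        # -1 means: derive the ratio from the card's BIOS defaults
        return default_shader / default_rop
    return shader_clock_ratio

# 8800 GTX defaults 1350/576 and 8800 GTS defaults 1188/513, as quoted above
print(round(effective_ratio(-1, 1350, 576), 2))     # 2.34
print(round(effective_ratio(-1, 1188, 513), 2))     # 2.32
print(effective_ratio(2.5, 1350, 576))              # 2.5 (user override)

# Shader clock that results when a GTX ROP clock of 612 MHz is applied:
print(round(612 * effective_ratio(-1, 1350, 576)))  # 1434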


Three important points here:
*) The driver currently imposes restrictions on how far the shader clock speed can be changed from what it otherwise would've been when linked to the core clock speed in old drivers (it is suspected that the restriction is owing to hardware limitations rather than a driver software design choice). This means you can't set an arbitrary shader speed which you know your card is capable of and necessarily expect it to work.
*) Setting the ShaderClockRatio to the special value of -1 will give you a core/shader speed linkage very similar to the one you had under previous drivers (163.44 and older).
*) When you change the value of ShaderClockRatio, you must also make a core clock speed change for it to take effect (see the sketch below). So, for example, you might reduce the core speed a little, apply, and then put it back to how it was and apply again.
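
The "bounce the core clock" step in the last point can be written out as a short sequence. The sketch below is pseudocode only; set_core() and apply() are hypothetical stand-ins for the sliders/buttons in whatever overclocking tool you use, not a real RivaTuner API:

Code:
# Hypothetical sketch of the apply sequence: after editing ShaderClockRatio,
# the driver only reprograms the clocks when it also sees a core (ROP) change.
def set_core(mhz):
    print("core slider ->", mhz, "MHz")

def apply():
    print("apply")

DEFAULT_CORE = 576   # BIOS default core clock (GTS example used above)
TARGET_CORE = 648    # the core overclock you actually want to run

set_core(DEFAULT_CORE); apply()   # drop back to default so a ROP change is seen
set_core(TARGET_CORE);  apply()   # restore the overclock; new shader ratio now applies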



Worked example
----------------

Surprise surprise, back to my EVGA 8800 GTS Superclocked! First off, if you've not already done so, I recommend setting up the RivaTuner monitor to show the core, shader and memory clock speeds so that you can immediately tell whether your core/shader clock changes are having any effect. My setup is Vista with the 163.67 drivers. With RivaTuner 2.03, when overclocking the core to 648, the shader would now stick at the bootup default speed of 1512 MHz (see the last paragraph of "Overview/Background" above). If I had blindly run 3DMark06 tests after installing the 163.67 driver, I would've assumed that the new drivers give worse performance, but the RivaTuner graphs show you that the shader is not running at the expected speed.

After installing RivaTuner 2.04, we are now able to set the ShaderClockRatio value to restore a higher shader clock speed. In my case since I want a shader speed of 1674 when the core is 648, I use 1674/648 = 2.58.
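
As a quick check of that ratio (plain arithmetic, nothing RivaTuner-specific):

Code:
# Ratio used in the worked example above.
target_shader = 1674
target_core = 648
print(round(target_shader / target_core, 2))   # 2.58 -> entered as ShaderClockRatio
print(round(target_core * 2.58))               # 1672, i.e. the shader ends up close to 1674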


=======================


Table A
--------

Some cards have slightly different set frequency vs. resultant core/shader speeds, so take the table as an illustration of how the shader clock changes with respect to the core clock rather than as precise values.

Code:
Set core  | Resultant frequency
frequency |   Core  Shader
---------------------------------
509-524   |   513   1188
525-526   |   513   1242
527-547   |   540   1242
548-553   |   540   1296
554-571   |   567   1296
572-584   |   576   1350
585-594   |   594   1350
595-603   |   594   1404
604-616   |   612   1404
617-617   |   621   1404
618-634   |   621   1458
635-641   |   648   1458
642-661   |   648   1512
662-664   |   675   1512
665-679   |   675   1566
680-687   |   684   1566
688-692   |   684   1620
693-711   |   702   1620
712-724   |   720   1674
725-734   |   729   1674
735-742   |   729   1728
743-757   |   756   1728
Source: http://forums.guru3d.com/showthread.php?t=238083
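
If you want to play with the linkage programmatically, here is a small Python sketch of a lookup over Table A (only a few rows are transcribed for illustration; the tuples are (set-from, set-to, resulting core, resulting shader) in MHz):

Code:
# Partial transcription of Table A, for illustration only.
LINKED_CLOCKS = [
    (509, 524, 513, 1188),
    (572, 584, 576, 1350),
    (642, 661, 648, 1512),
    (743, 757, 756, 1728),
]

def resulting_clocks(set_core_mhz):
    # Return the (core, shader) pair the old linked behaviour would give,
    # or None if the requested value falls outside the transcribed rows.
    for low, high, core, shader in LINKED_CLOCKS:
        if low <= set_core_mhz <= high:
            return core, shader
    return None

print(resulting_clocks(648))   # (648, 1512), matching the worked example above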
 
Quick instructions:

For XP:

1 - Go to the "Power user" tab (inside RivaTuner 2.04)

2 - Go to "RivaTuner\System" and change the value of "NVAPIUsageBehavior" to 1

3 - Go to "RivaTuner\Nvidia\Overclocking" and set the desired shader clock multiplier in "ShaderClockRatio" (e.g. 2.5 (multiplier) x 600 (core clock) = 1500 (shader clock))

4 - Restart the PC

On Vista, step 2 is not needed.

In any case, I recommend reading the explanation above!
 
borbs, can you show me where to change the multiplier?? (So I don't mess things up like I did with the Windows registry once :D)

DHM, it's really simple, you change the Shader clock value here:

[screenshot]



Then you do the math based on the core clock, e.g. 600 core x 2.5 = 1500 shader clock, and so on. It works like a charm here :)

[screenshot]


Cheers
 
Do you have RivaTuner 2.04?

It looks like 2.02 to me, but I'm also not sure whether the 8600 cards have a shader clock; try checking with the RivaTuner hardware monitoring plugin.
 
Yes, it does; mine doesn't go past 1516MHz I think (the core clock won't go past 670 :()
Already downloading Riva ;) I'll let you know later if the option shows up there ;)
 
Hi folks, any trick for Counter Strike: Source with the 8800GTX? A few days ago I was looking into some things and I get miserable fps compared to other systems identical to mine :| and I have no idea what it is.. every other game runs fine... in Counter Strike: Source I get crazy FPS drops.. a lot of the time I'm at 180 or 200.. and then it free-falls to 100 or 80.. and if there's a lot going on it drops to around 80 fps... what on earth could it be? :\ I've already been through forums and so on.. nothing :s I only see problems and more problems.. but solutions.. none.
Does anyone have the same problem, or know the solution? :\ I'm starting to get paranoid xD
I'm going to uninstall everything I have from Counter Strike and Half-Life.. and see if anything improves.. I see people with stable FPS.. something like 250 fps steady.. with no free-falls.. here it looks like free-fall jumps.. :\

Regards
 
Nope.. but I will.. when I feel like it.. but does the OC have anything to do with it? It should stay stable, I'd think.. it might have lower fps, but still...
It's still a fairly powerful setup as it is :\
 
No, the problem with Source is that the engine is very CPU-dependent.
You gain from a CPU overclock; for example, going from 2.4GHz to 3.0GHz you gain an average of 90 frames with an 8800GTX. And the best thing you can do is cap the frames at 100, so they stay stable.
 
Something similar happened to me because of the CPU's dynamic speed throttling (which was constantly oscillating) and because of PowerMizer, which kept changing the clock speed of my 8600M... curiously this only happens in HL2 and Source... with both of those disabled everything runs perfectly at 60 fps (vsync :P) with 8x AA...

Cheers
 