And how many points do those WUs give?
Here at ***** an X1900 XTX CrossFire Master and an X1950 XTX are already doing their duty. I'm curious to see what the real performance boost of this new client turns out to be.
I don't know if you've all seen these statistics yet. They're "incredibly" revealing (though not for anyone who had already read the news about the 30x to 40x speedup).
OS Type     Current TFLOPS*   Active CPUs   Total CPUs
Windows     145               152672        1501579
Mac OS X    4                 7630          89839
Linux       20                16944         190877
GPU         10                142           145
Total       169               177388        1782295
http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats
Basically, what's striking at first glance is that with only 142 active graphics cards against 7630 active Mac OS X machines, the Macs have already been overtaken in computing power, and Linux's turn is coming soon. 8o
I don't know whether the statistics are real, but since they're on the official site, that's enough to give them credibility.
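Dividing the posted TFLOPS by the active client count gives a rough per-client throughput. A quick Python sketch using only the figures copied from the table above (nothing re-measured):

```python
# Rough per-client throughput from the posted osstats table
# (figures copied from the table above, nothing re-measured)
stats = {
    "Windows":  (145, 152672),
    "Mac OS X": (4, 7630),
    "Linux":    (20, 16944),
    "GPU":      (10, 142),
}
for os_name, (tflops, active) in stats.items():
    gflops_per_client = tflops / active * 1000
    print(f"{os_name:9s} ~{gflops_per_client:6.2f} GFLOPS per active client")
```

That works out to roughly 70 GFLOPS per active GPU versus about 1 GFLOPS per active Windows CPU, which lines up with Pande's ~100 gigaflops figure quoted below once you allow for idle time and unreturned WUs.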
*TFLOPS is actual flops from the software cores, not the peak values from CPU specs. GPU clients are now included in the FLOP accounting (but these numbers only reflect WUs that have been returned).
Folding@Radeon
First among them was Vijay Pande of Stanford University, Professor of Chemistry and Director of the Folding@Home project. TR readers should be very familiar with Folding, since we field one of the top ten Folding teams in the world. Pande was there to talk about the new beta Folding client that uses the GPU. Currently it only runs on newer Radeons, where it shows big performance increases of between 20 and 40 times the speed of a CPU. Pande said the client is presently achieving around 100 gigaflops per GPU. To give some perspective, he then demonstrated the graphical versions of the CPU and GPU clients side by side: the GPU version showed constant motion, while the CPU one chugged along at a few frames per second.
This particular implementation of stream computing has now gone live. The FAH project released the first beta of the client to the public earlier this week.
I talked with Pande about the possibility of a Folding client for Nvidia GPUs, and he had some interesting things to say. The Folding team has obviously been working with Nvidia, as well as ATI. In fact, Pande said Nvidia has their code and is running it internally. At present, though, ATI's GPUs are about eight times as fast as Nvidia's. He was hopeful Nvidia could close that gap, but noted that even a 4X gap is pretty large—and ATI is getting faster all of the time.
The bottom line for Pande and his colleagues, of course, is how Folding on a GPU can further research about diseases like Parkinson's and Alzheimer's. Pande characterized the move to GPU Folding as one that opens new possibilities.
Orton pegged the floating-point power of today's top Radeon GPUs with 48 pixel shader processors at about 375 gigaflops, with 64 GB/s of memory bandwidth. The next generation, he said, could potentially have 96 shader processors and will exceed half a teraflop of computing power.
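As a quick sanity check on those numbers (assuming, purely for illustration, that FLOPS scale linearly with shader count at the same clock):

```python
# Linear-scaling estimate from Orton's figures: 48 shaders ~= 375 GFLOPS
gflops_per_shader = 375 / 48          # about 7.8 GFLOPS per shader processor
next_gen = 96 * gflops_per_shader     # about 750 GFLOPS
print(f"96 shaders -> ~{next_gen:.0f} GFLOPS")  # comfortably over half a teraflop
```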
I can't wait for the PS3 to come out so we can see comparative numbers and find out which is better at this stage, bearing in mind that the PS3 is a system that will stay closed while graphics cards will keep evolving at least every six months.
We shall see, as the blind man says.
The PS3's Cell has 250-odd GFLOPS (single precision). Well exploited, and with the estimated efficiencies (around 1/4 on graphics cards, maybe 2/3 on the Cell), real performance might end up between 150 and 200 GFLOPS, i.e. twice the effective performance of current GPUs. On the other hand, the PS3 also has a GPU with 1 teraflop of computational capacity, and if they use that for folding too, I reckon another 200/250 GFLOPS could be extracted. If Cell+GPU folding on the PS3 goes ahead, it could approach 500 GFLOPS, in other words half a teraflop. But that's probably still a year away, and by then the new GPUs will have arrived.
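Putting those guesses into numbers (the peak figures and efficiency fractions are the estimates above, not measured values):

```python
# Effective-throughput estimate for Folding on the PS3, using the
# assumed efficiencies: ~2/3 for the Cell, ~1/4 for a GPU
cell_peak_gflops = 250          # single precision, rough figure
rsx_peak_gflops = 1000          # the advertised "1 teraflop" GPU number
cell_effective = cell_peak_gflops * 2 / 3    # ~167 GFLOPS
rsx_effective = rsx_peak_gflops * 1 / 4      # ~250 GFLOPS
print(f"Cell ~{cell_effective:.0f} + RSX ~{rsx_effective:.0f} "
      f"= ~{cell_effective + rsx_effective:.0f} GFLOPS combined")
```

That comes out to roughly 420 GFLOPS combined, which is in the right neighbourhood of the half-teraflop guess.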
Yes, an SMP-aware client would be very welcome: a single client that talks to the various CPU cores, unlike today, where you have to run one client per core (see the sketch below).
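A minimal sketch of the idea, assuming independent work units (the names here are made up for illustration; this is not how the real FAH client is written):

```python
# One client process farming work units out to every core,
# instead of launching a separate client per core
import multiprocessing as mp

def crunch_wu(wu_id):
    # stand-in for the real work-unit computation
    total = 0
    for i in range(5_000_000):
        total += i * i
    return wu_id

if __name__ == "__main__":
    work_units = range(8)                      # pretend queue of WUs
    with mp.Pool(processes=mp.cpu_count()) as pool:
        for done in pool.imap_unordered(crunch_wu, work_units):
            print(f"WU {done} finished")
```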
I haven't been following the details, but what's being said about PC graphics cards is that although they have Folding running on Nvidia boards, those are currently 8x slower than the ATI ones.
Metro, answer me this if you know:
Will the PS3 have an internet connection?
Gosh.
Sometimes it's not a matter of being slower for lack of computational power. The problem with GPGPU is the difficulty of getting the chips to do things they weren't designed for. It's better to think of it this way: ATI's GPUs are, for now, more "friendly" than Nvidia's (it wasn't always so), but both companies are going after stream computing, so sooner or later it will be neck and neck.
We have been examining the GPU core and how it runs on donor machines. After some initial core problems, it appears that the code is working well for the most part (with just some cosmetic issues, which we are working on). Donors have had success with both the 6.5 and 6.10 ATI drivers, so we suggest switching to the other driver if one is giving problems.
The primary issue now appears to be the CPU use of the GPU core. Due to how graphics drivers work in Windows, the CPU must poll to see if the GPU has completed. This polling is very CPU intensive (as the GPU does complete its work fairly quickly). We are working on a fix to this, but it is also likely that future GPU cores may use CPU power for scientific calculations which cannot be run on the GPU.
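To illustrate what that polling looks like from the CPU side (a toy sketch; the real client polls through the graphics driver, and completion is faked here with a timer):

```python
# Busy-wait polling: the loop keeps one CPU core pegged at ~100%
# until the (simulated) GPU reports completion
import time

deadline = time.monotonic() + 0.5   # pretend the GPU needs 0.5 s

def gpu_finished():
    # stand-in for a cheap driver query that returns False until done
    return time.monotonic() >= deadline

polls = 0
while not gpu_finished():           # this spin is what eats the CPU
    polls += 1
print(f"GPU done after {polls} polls")
```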
Thus, we are asking donors who run the GPU core to leave some CPU power (~1 core) available for GPUs to use. We need to compensate donors for this additional use of resources, so the points have been increased. The very idea of a GPU core and GPU software is new, so we are still working out the best way to handle these issues, but in general we will of course award points based on the hardware used: more hardware used, more points. As we develop the GPU core, the points may need to change (possibly up, especially if more CPU is used; possibly down if essentially no CPU is needed).
We'd also like to thank all the beta testers who have given us great feedback. It's still very early for our GPU core, but the future is already looking very bright.