Graphics cards beat previous record that used 1,000 CPUs
A researcher has broken the record for calculating the mathematical constant pi by using a number of computers equipped with Nvidia graphics cards. Ed Karrels of Santa Clara University used the GPUs on the graphics cards instead of conventional CPUs to calculate pi to eight-quadrillion places.
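Records of this kind typically rely on a digit-extraction method such as the Bailey-Borwein-Plouffe (BBP) formula, which can compute hexadecimal digits of pi at an arbitrary position without calculating any of the digits before it; because each position's sum is independent, the work spreads naturally across many GPU cores. A minimal Python sketch of the idea (the function names are illustrative, and real record runs use far more careful arithmetic):

```python
def _bbp_series(j, n):
    # Fractional part of the sum over k of 16^(n-k) / (8k + j)
    s = 0.0
    for k in range(n + 1):  # terms where 16^(n-k) is an integer
        denom = 8 * k + j
        s = (s + pow(16, n - k, denom) / denom) % 1.0
    k = n + 1               # remaining terms shrink geometrically
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digit(n):
    # nth hexadecimal digit of pi after the point, counting from 0
    x = (4 * _bbp_series(1, n) - 2 * _bbp_series(4, n)
         - _bbp_series(5, n) - _bbp_series(6, n)) % 1.0
    return "0123456789ABCDEF"[int(x * 16)]
```

Pi in hexadecimal begins 3.243F6A88..., so `pi_hex_digit(0)` returns "2" and `pi_hex_digit(3)` returns "F" without touching the digits in between.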
NVIDIA drops proprietary lock on CUDA tech
NVIDIA took a major step toward spurring growth of its CUDA general-purpose computing technology for video cards on Tuesday by posting the CUDA compiler source code. Developers and academics now have access to a variant of the LLVM compiler that will let them add support for new processor types and languages. The move could see CUDA run on AMD's Radeon hardware and Intel's integrated graphics, and even support comparatively old languages like Fortran.
Card said to have dual graphics circuits
NVIDIA is reportedly set to introduce a new flagship graphics card, said to carry the label GeForce GTX 590. Corroborating earlier leaks, the card will allegedly integrate dual GF110 GPUs with a combined 1,024 CUDA cores and 3GB of memory, sources have told NordicHardware. The latest information suggests a separate leak, tied to the name GTX 595, may have been an early prototype of the 590.
NVIDIA GeForce GTX 570 official at mid-range
NVIDIA today confirmed its second-ever GeForce 500 series chipset in a push to bring its new graphics to the mainstream. The GTX 570 has the same 480 cores as the older GTX 480 but, thanks to the refined architecture, runs at a higher 732MHz main clock speed, a 1.46GHz shader clock and a 1.9GHz memory clock. It uses a narrower 320-bit memory interface (down from 384 bits) but, due to the combined improvements, has a texture fill rate bumped up from 42 billion to 43.9 billion texels per second.
NVIDIA GeForce GT 540M ramps up notebook video
NVIDIA overnight quietly brought out its first 500-series GeForce notebook graphics. The GeForce GT 540M, like the GTX 580, is primarily a clock speed bump, raising its main and effects core clock speeds to 672MHz and 1.34GHz respectively. It shares the GeForce GT 435M's 96 cores and 128-bit memory interface.
Microsoft wants to patent GPU video encoding
A US patent filing published today has raised concerns, as it could give Microsoft control of hardware-accelerated video encoding. A continuation of an application for "accelerated video encoding using a graphics processing unit" would cover the common technique of calculating motion for video processing on a computer's video chipset rather than the usually slower main CPU. Its claims are broad and cover techniques such as dividing each frame into macroblocks so they can be processed in parallel.
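The macroblock approach is what makes the job parallel-friendly: each block of a frame can independently search a small window of the previous frame for its best match, so thousands of blocks can be scored at once on a GPU. A CPU-side Python sketch of that per-block search, with frames as 2D lists of pixel intensities (the function and parameter names are illustrative, not taken from the filing):

```python
def block_sad(prev, cur, bx, by, dx, dy, size):
    # Sum of absolute differences between a macroblock of the current
    # frame at (bx, by) and a candidate block in the previous frame
    # displaced by (dx, dy)
    return sum(abs(cur[by + y][bx + x] - prev[by + dy + y][bx + dx + x])
               for y in range(size) for x in range(size))

def best_motion_vector(prev, cur, bx, by, size=4, search=2):
    # Exhaustive search of a small window for the offset that best
    # predicts the current macroblock from the previous frame
    h, w = len(prev), len(prev[0])
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy <= h - size and 0 <= bx + dx <= w - size):
                continue  # candidate block falls outside the frame
            cost = block_sad(prev, cur, bx, by, dx, dy, size)
            if cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost
```

An encoder runs this search for every macroblock of every frame; since no block depends on another's result, the loop over blocks is embarrassingly parallel, which is exactly why offloading it to a GPU pays off.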
NVIDIA GeForce GT 430 official
NVIDIA today confirmed its rumored entry-level video chipset, the GeForce GT 430. The design is targeted at home theater computers and other systems where video playback support is more important than raw 3D performance and now supports HDMI with passthrough audio. It can output HDMI 1.4a for full 3D video along with Dolby TrueHD and DTS-HD Master Audio surround sound.
NVIDIA CUDA to run natively on x86 chips
NVIDIA chief Jen-Hsun Huang today at its GPU Technology Conference said his company would bring its CUDA general-purpose computing language directly to x86 chips. The approach developed with the Portland Group will let systems without NVIDIA cards handle the code. It will work best with multi-core processors and is seen as ideal for servers.
NVIDIA GF GT 425M surfaces in ASUS notebook
NVIDIA's first mainstream GeForce 400 notebook chipset, the GT 425M, has surfaced in leaks this month. Semi-Accurate noted that the graphics core has been listed as appearing in a 17-inch ASUS notebook, though with conflicting specifications. Some listings describe it as only a DirectX 10 chip, implying that it's merely a refreshed 300M part, while others mention DirectX 11 and suggest it would use a version of the Fermi architecture found in the GTX 480M.
NVIDIA puts out GTX 460 at ideal price
NVIDIA today brought out its first more frugal GeForce 400 series chipset in the form of the GeForce GTX 460. The recently leaked hardware uses the new 40-nanometer GF104 chipset and is actually faster than the GTX 465 in some areas. It has fewer stream (visual rendering) processors, at 336 versus 352, but more texture address units, at 56 compared to 44; it also has faster clock speeds across the board, with a 675MHz core, a 1.35GHz shader (effects) clock, and a 900MHz actual speed for its GDDR5 memory.
GeForce GTX 465 hits mainstream pricing
NVIDIA today launched its first mainstream graphics chipset in the Fermi family with the GeForce GTX 465. The chipset scales back from the GTX 470 with 352 visual effect (stream) processors versus 448, 32 render output processors versus 40, and slightly slower 802MHz GDDR5 memory on a 256-bit bus. It still runs at the same clock speed and carries the full feature set, which gives it DirectX 11 (and OpenGL 4) features as well as higher-performance general computing in CUDA, PhysX and OpenCL.
NVIDIA claims GTX 480M wins mobile speed crown
NVIDIA today started off its GeForce 400M notebook graphics line by launching the series' highest-end model. The GeForce GTX 480M, at 352, has almost three times as many shader (effects) cores as the GTX 285M it replaces, and carries the hardware tessellation, cache and other changes that make it a generational leap. NVIDIA claims the 480M can be as much as five times faster than ATI's Mobility Radeon 5000 series in graphics duties and 10 times faster at encoding video when using technology like CUDA or OpenCL.
Requires NVIDIA-based GPUs
New Zealand's Refractive Software has released a beta of Octane Render, a new rendering tool for Mac, Windows and Linux systems. The software is notable for exploiting NVIDIA's CUDA architecture, which accelerates ray tracing on systems with one or more NVIDIA video cards. Octane can run 10 or more times faster than apps that use CPU-based ray tracing, Refractive claims.
GeForce GTX 480 and 470 finally official
NVIDIA at PAX East tonight finally released the first video chipsets based on its Fermi architecture. The top-end GeForce GTX 480 leads the group and is billed as the "fastest GPU in the world": it has 480 visual processing cores, 16 geometry units and four raster units that, combined, should beat the ATI Radeon HD 5800 series in real-world tests. It also has major optimizations to multi-card SLI that produce a 90 percent speed boost with a second card, making the case for multiple GeForce 400 series cards in high-end systems.
NVIDIA GTX 480 would run faster overall
NVIDIA late yesterday put up a video (viewable below) that provides some of the first official performance indicators for the GeForce GTX 480. In a synthetic test using Unigine, a 3D engine that supports DirectX 11 (OpenGL 3.2) features, the GTX 480 was tied with the ATI Radeon HD 5870 in maximum frame rates but was much faster under heavy load, often producing over 40 frames per second where the AMD-made rival produced a noticeably choppier 20. The test isn't necessarily reflective of real gameplay, and is the same demo Electronista saw at CES, but it suggests the card will likely be faster in games that use techniques like tessellation to dynamically add detail.
NVIDIA Fermi cards only for loyal firms initially
NVIDIA's GeForce GTX 470 and 480 will only be available in short supply and at very high prices, a leak from within the graphics industry claims. Although the company has promised a launch on March 26th at PAX East, boards based on the new architecture will reportedly only be available through companies most loyal to NVIDIA that don't sell any ATI-branded graphics. The sources allege that "second-tier" third parties that sell both brands will have to wait until sometime in April.
System utilizes CPU and GPU resources
Bunkspeed has announced that its upcoming SHOT software will integrate Iray rendering technology developed by Mental Images. The new system will allow the software to process renders using a combination of CPU and GPU resources, including NVIDIA's CUDA-equipped graphics cards. Offloading the rendering tasks to various components is said to significantly reduce processing times.
GeForce 300M uses 40nm, more shaders
Although it outlined some products at CES, NVIDIA has now formally detailed its GeForce 300M graphics line. The new series still uses a DirectX 10-level (OpenGL 2.1) architecture but is more efficient, in some cases using as many as 50 percent more shader (visual effect) cores. The series continues to provide full hardware video decoding and supports general-purpose computing like CUDA and OpenCL.
GeForce GT 240 updates NV's budget GPUs
NVIDIA in a low-key move today launched the GeForce GT 240. The chipset brings mid-level performance to sub-$100 cards and uses the newer 40-nanometer manufacturing process to make itself a reasonable fit in budget PCs: its low energy use lets it occupy just one slot and run entirely off the power of the PCI Express bus instead of needing a separate power connector.
NVIDIA 20 series Teslas appear
NVIDIA today provided details of the first official hardware to use its upcoming Fermi architecture. The Tesla 20 series is even more optimized for general-purpose computing standards like OpenCL or NVIDIA's own CUDA and handles complex math that previously hasn't been as practical, such as IEEE-standard double-precision math and C++ code processing. Unlike past models, though, the card also has a video output and works as a video card rather than just as a companion device.
New version supports up to 256 cores
The Portland Group has announced the 2010 version of its PGI compilers and development tools. The new release is the first to fully support the PGI Accelerator Programming model 1.0 standard for x64 processor systems that use NVIDIA CUDA-enabled GPUs. CUDA Fortran, which is the primary GPU interface, is also now included to give developers direct control of NVIDIA GPUs.
NVIDIA Fermi with Snow Leopard in mind
The just-unveiled Fermi graphics architecture will find its way into Macs and play an important role in Mac OS X Snow Leopard, NVIDIA chief scientist Bill Dally said today. While NVIDIA is expected to continue playing an important part in future Macs, the researcher drew a particular connection between the new GPU design and Apple's new OS, expecting it to provide a significant boost for apps that implement OpenCL. Windows 7 will also get support through DirectX 11 and DirectCompute.
Fermi jumps to 512 cores
NVIDIA this evening provided an early look at the next generation of its graphics processors. Nicknamed Fermi, the architecture for future GeForce, Quadro and Tesla chipsets will jump from 240 cores to a much larger 512 and should be much faster per core courtesy of some industry-first techniques. Fermi chips will be the first GPUs to have a true cache hierarchy, with Level 1 caches to keep specific information on hand and a single, shared Level 2 cache for larger tasks; they will also have a new GigaThread engine that can transfer data in both directions simultaneously and handle "thousands" of tasks at once.
Translates for supporting video cards
NVIDIA has announced that a public beta release of the PGI CUDA-enabled Fortran compiler is now available for download. The compiler was developed by NVIDIA and the Portland Group, and is billed as the first Fortran compiler that works with CUDA-enabled GPUs. It translates the Fortran programming language into binary code that CUDA GPUs can execute.
NVIDIA Ion Sequel Speedup
The sequel to NVIDIA's Ion platform could be much faster than its predecessor when it ships, leaks today suggest. While the current graphics-and-chipset combo is based on the GeForce 9400M and has just 16 visual effects cores, Fudzilla now hears that its successor will have 32 cores, potentially doubling the number of effects it can handle at once. The difference will have the most dramatic effect on 3D but could also impact general-purpose computing tasks that use CUDA or OpenCL.
Apple ARM Cortex Job
Apple has posted a job listing that hints at the company's future hardware direction for the iPhone, iPod touch and possibly other devices. The position, for a High Perform/Low Level Programmer, asks for someone familiar not only with programming assembly-level code for ARM processors, which Apple already uses in its handhelds, but also with the NEON vector math units used in the newer Cortex architecture for mobile chips. Apple is especially concerned about experience with vector math and particularly values anyone with additional knowledge of vector units in general-purpose CPUs, such as Intel's SSE or the AltiVec units found on PowerPC G4 and G5 cores.
Mac mini NVIDIA Ion Rumor
Apple's Mac mini desktop will switch to NVIDIA's Ion platform and, with it, Intel's Atom chip, an alleged confirmation by an NVIDIA partner asserts. The Cupertino company is believed to be using the combined graphics and system chipset in an updated computer that would also use the 1.6GHz, dual-core Atom 330. While slower in processing power than the existing Core 2 Duo, the mini would have video performance close to, if not identical to, that of the GeForce 9400M currently found in all aluminum MacBooks.
Apple OpenCL trademark
Apple has filed for a trademark on its OpenCL technology, documents from the US Patent and Trademark Office show. The standard is intended to better distribute processing power in a computer by using the normally segregated processors on a video card to help with tasks unrelated to graphics. Video cards can be extremely powerful, Apple notes, but tend to be used to their fullest only in certain applications, such as games.
PGI 8.0 compiler released
The Portland Group has released PGI 8.0, the latest version of its compiling and development suite for Mac, Linux and Windows software. The v8.0 edition introduces support for the OpenMP 3.0 multi-core programming standard, which lets coders exploit multiple cores in C or Fortran. Concurrent with this is the ability to create and debug OpenMPI apps in Linux and the Mac OS, and build and deploy multi-core apps to any "major" desktop or cluster OS.
NVIDIA Quadro FX 5800
NVIDIA on Monday established a new flagship video card for its workstation line. The Quadro FX 5800 is a major improvement over the earlier FX 5600, bringing 240 visual effects cores, or nearly double the old model's 128, and stands out as the first-ever video card with 4GB of onboard memory. The capacity allows for extremely large textures and geometry and also lets apps with CUDA general-purpose computing support feed in very large data sets: 4D modeling that factors in time is now more realistic.
NVIDIA, MotionDSP team up
NVIDIA on Thursday announced it has partnered with digital video software maker MotionDSP to give users a way to enhance the quality of personal videos shot on cellphone cameras, point-and-shoot digicams and camcorders, or captured from the Internet. Written using NVIDIA's CUDA general-purpose computing technology to take advantage of the GPU's ability to handle non-graphics tasks, MotionDSP's software, codenamed Carmel, tracks every pixel across tens of video frames before reconstructing high-quality video from low-res sources.
NVIDIA CUDA 2.0 Final
NVIDIA today formally released the finished version of CUDA 2.0. The second generation of the company's general-purpose programming language for its video chipsets supports 64-bit versions of Mac OS X and Windows Vista and adds instructions that help offload more specialized tasks from the main processor to the video card, such as 3D textures and hardware-accelerated interpolation of data.
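Hardware-accelerated interpolation refers to the GPU's texture units blending neighboring texels automatically when a fetch lands between them, rather than the program doing the arithmetic itself. Conceptually, a 1D linear texture fetch computes a weighted average like the following plain-Python sketch of the math (illustrative only, not NVIDIA's API):

```python
def tex1d_linear(texels, x):
    # Sample a 1D texture at fractional coordinate x by linearly
    # blending the two nearest texels, clamping at the edges
    # (clamp-to-edge addressing)
    last = len(texels) - 1
    x = max(0.0, min(float(x), float(last)))
    i = int(x)            # nearest texel at or below x
    frac = x - i          # blend weight toward the next texel
    nxt = min(i + 1, last)
    return texels[i] * (1 - frac) + texels[nxt] * frac
```

Sampling [2, 4, 8] at coordinate 1.25, for instance, blends the second and third texels as 4 * 0.75 + 8 * 0.25 = 5.0; on the GPU this blend happens in the texture hardware at no extra instruction cost.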
Apple adds CUDA dev kit
Apple has updated its download site with the latest version of the NVIDIA CUDA development kit. CUDA (Compute Unified Device Architecture) allows users to crunch mathematical formulas using the GPU resident inside a computer, speeding up otherwise lengthy tasks, such as encoding video or other rich media, as well as scientific and design work. The kit will allow developers to tweak their code to run optimally on systems such as the MacBook Pro and its GeForce 8600M graphics chipset.
NVIDIA Candidate for Mac
Apple's rumored non-Intel mainboard platform may primarily involve a change of suppliers to NVIDIA rather than any kind of custom development, PCPer suggests. The enthusiast site notes that Santa Clara, California-based NVIDIA has been developing its first nForce mainboard chipset for Intel-based notebooks, currently codenamed MCP79, with the aim of addressing several weaknesses that have affected Intel's own designs, and thus Apple as well. The architecture would support all the necessary components for Intel's just-announced Core 2 processors, including a 1,066MHz system bus and the option of DDR3 memory.
Apple to adopt CUDA?
Apple may be looking to adopt proprietary Nvidia code for Macs, an interview suggests. Nvidia's CEO, Jen-Hsun Huang, has said that Apple is extremely interested in his company's CUDA programming language, which simplifies coding for Nvidia graphics hardware and can in theory improve performance. In February, an early Mac SDK for CUDA was released in beta form.
Adobe CS4 GPU Use
(Update with corrections) The version of Photoshop included with Adobe's upcoming Creative Suite 4 will include fuller acceleration for dedicated video hardware as well as the first support for physics processing, TGDaily has been told as part of an early demonstration. While CS3 already has limited support for graphics processing units (GPUs) in certain filters, the new version will use video hardware to improve performance across much of the image editor's pipeline. It will also enable new editing techniques: users can bring in a 3D image and paint it, with changes applied immediately.
NVIDIA GeForce GTX coming?
NVIDIA plans to refresh its GPU lineup next month with two new graphics cards featuring its next-generation CUDA-enabled graphics core, codenamed D10U. The company is expected to deliver both the GeForce GTX 260 (D10U-20) and GeForce GTX 280 (D10U-30) on June 18th as part of its summer refresh, according to DailyTech. The report claims the GTX 260 will be a "significantly" scaled-down version of the GTX 280, which enables all 240 unified stream processors designed into the chip. These second-generation unified shaders perform 50 percent better than those of the previous generation, the company claims in its documentation; however, the new NVIDIA cards will only support GDDR3 memory and DirectX 10, while AMD focuses on faster GDDR5-based cards due early this summer.
Nvidia CUDA for Macs
Nvidia today announced the release of a CUDA beta (12.9MB) for the Mac, a developer's kit that allows users to create derivative works for academic, commercial or personal purposes. In addition to Nvidia's cards being used for gaming, some research firms use CUDA for molecular modeling with parallel processing implementations, according to The Inquirer. The CUDA developer kit beta is available from Nvidia's website.
Nvidia to acquire Ageia
Nvidia today announced an official acquisition of Ageia Technologies, the company behind the PhysX software and hardware. The acquisition will give Nvidia a physics element for its CUDA parallel processing systems. PhysX technology is currently used in many Xbox 360, PlayStation 3 and Nintendo Wii games, as well as in many gaming PCs worldwide. Nvidia will host a quarterly conference call on February 13th to provide more information as the acquisition reaches its final stages.