You Can Google anything here !!!

Sunday, October 31, 2010

3-D Gesture Controls from Microsoft !!


Microsoft’s Newest Acquisition Is About 3-D Gesture Controls !!




Microsoft has acquired Canesta, the manufacturer of semiconductor chips capable of sensing movement and gestures in 3-D. The technology could be applied to everything from Windows 8 motion controls to its Xbox Kinect motion-sensing device.
Canesta, founded in 1999, specializes in the 3-D sensors that power “Natural User Interfaces.” A NUI doesn’t require inputs like a mouse or a keyboard for a user to interact with a program or interface; it takes its commands from natural human gestures. While the fictional UI in the 2002 film Minority Report is the best-known NUI in popular culture, Microsoft actually owns two popular ones: Microsoft Surface and Xbox Kinect.
Canesta has raised approximately $60 million in funding since its inception from investors including Carlyle Venture Partners, Venrock and Honda. The latter is hoping to use Canesta’s technology to help its cars detect and avoid obstacles. The financial terms of the Microsoft acquisition were not disclosed.
Microsoft utilizes 3-D sensing technology from competitor PrimeSense in its Xbox Kinect gaming system, according to The New York Times. Kinect launches on November 4.
Today’s deal may be more about Canesta’s intellectual property than about bringing more natural user interfaces to Microsoft’s products. Canesta holds 44 patents covering 3-D sensing technology, processing algorithms and chip design. Having those patents handy will help Microsoft avoid lawsuits as it experiments with even more NUIs.
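A quick aside for the geeks: Canesta’s chips are time-of-flight sensors, meaning each pixel essentially times how long emitted infrared light takes to bounce off your hand and come back. Here is a minimal sketch of the underlying arithmetic (the function name and example timing are my own illustrations, not anything from Canesta):

```typescript
// Time-of-flight depth sensing in a nutshell: emit light, time the echo.
const SPEED_OF_LIGHT = 299_792_458; // metres per second

// Round-trip time -> distance: the light travels out and back,
// so the one-way depth is half of (c * t).
function depthFromRoundTrip(roundTripSeconds: number): number {
  return (SPEED_OF_LIGHT * roundTripSeconds) / 2;
}

// A hand ~50 cm from the sensor returns light in about 3.3 nanoseconds,
// which is why these chips need sub-nanosecond timing precision.
console.log(depthFromRoundTrip(3.3e-9).toFixed(2)); // ≈ 0.49 m
```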

Check out this video:


It's nice to watch, but where is our man Pranav Mistry?? :(


Friday, October 29, 2010

First humanoid robot in space ever !!

Robonaut 2 !!!
Robonaut is a humanoid robotic development project run from the Dextrous Robotics Laboratory at NASA's Johnson Space Center in Houston, TX. Robonaut is a different class of robot from other current spacefaring robots.
While most current space robotic systems focus on moving large objects — such as robotic arms, cranes and exploration rovers — Robonaut's tasks require more dexterity.
The core idea behind the Robonaut series of robots is to have a humanoid machine work alongside astronauts. Its form factor and dexterity are designed so that Robonaut can use space tools and work in environments similar to those of suited astronauts.

The latest Robonaut version, R2, is slated to be delivered to the ISS by the space shuttle Discovery on mission STS-133, scheduled to launch on November 4, 2010, and subsequently tested "indoors". Almost 200 people from 15 countries have visited the International Space Station, but the orbiting complex has so far only ever had human crew members – until now.


R2'S SPECIFICATIONS





Materials: Primarily aluminium, with steel and other nonmetallic materials
Weight: 23 1/2 stone (about 330 pounds)
Height: 3 feet, 4 inches (from waist to head)
Shoulder width: 2 feet, 7 inches
Sensors: 350+ total
Processors: 38 PowerPC processors
Degrees of freedom: 42 total
Speed: Up to 7 feet per second (about 2 metres per second)

What is a Robonaut?
A Robonaut is a dexterous humanoid robot built and designed at NASA Johnson Space Center in Houston, Texas. Our challenge is to build machines that can help humans work and explore in space. 
Working side by side with humans, or going where the risks are too great for people, Robonauts will expand our ability for construction and discovery. Central to that effort is a capability we call dexterous manipulation, embodied by an ability to use one's hand to do work, and our challenge has been to build machines with dexterity that exceeds that of a suited astronaut.
There are currently four Robonauts, with others in development. This allows us to study various types of mobility, control methods, and task applications. The value of a humanoid over other designs is the ability to use the same workspace and tools - not only does this improve efficiency in the types of tools, but it also removes the need for specialized robotic connectors. Robonauts are essential to NASA's future as we go beyond low Earth orbit and continue to explore the vast wonder that is space.

When Robonaut 2, or R2, launches to the International Space Station on space shuttle Discovery as part of the STS-133 mission, it will become the first dexterous humanoid robot in space and the first US-built robot at the space station. But that will be just one small step for a robot and one giant leap for robot-kind. With the current iteration of Robonaut, R2, NASA and General Motors are working together to accelerate development of the next generation of robots and related technologies for use in the automotive and aerospace industries.


Robonaut 2 (R2) is a state-of-the-art, highly dexterous anthropomorphic robot. Like its predecessor Robonaut 1 (R1), R2 is capable of handling a wide range of EVA tools and interfaces, but R2 is a significant advancement over its predecessor. R2 is capable of speeds more than four times faster than R1, is more compact, is more dexterous, and includes a deeper and wider range of sensing. Advanced technology spans the entire R2 system and includes: an optimized overlapping dual-arm dexterous workspace, series elastic joint technology, extended finger and thumb travel, miniaturized 6-axis load cells, redundant force sensing, ultra-high-speed joint controllers, extreme neck travel, and high-resolution camera and IR systems.
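One item on that list, "series elastic joint technology", is worth unpacking: a series elastic actuator puts a spring between the motor and the joint, so measuring how much the spring twists gives a direct torque reading and makes the arm compliant around humans. A minimal sketch of the idea (the stiffness and angle values are made up purely for illustration, not R2's real parameters):

```typescript
// Series elastic actuator: joint torque is inferred from spring deflection.
// By Hooke's law for a torsion spring: tau = k * (motor angle - joint angle).
const SPRING_STIFFNESS = 300; // N·m per radian (illustrative value)

function jointTorque(motorAngleRad: number, jointAngleRad: number): number {
  return SPRING_STIFFNESS * (motorAngleRad - jointAngleRad);
}

// If the motor leads the joint by 0.05 rad, the spring transmits ~15 N·m.
console.log(jointTorque(1.0, 0.95)); // ≈ 15
```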

The dexterity of R2 allows it to use the same tools that astronauts currently use and removes the need for specialized tools just for robots. One advantage of a humanoid design is that Robonaut can take over simple, repetitive, or especially dangerous tasks in places such as the International Space Station. Because R2 is approaching human dexterity, tasks such as changing out an air filter can be performed without modifications to the existing design. Another way this might be beneficial is during a robotic precursor mission.



Click this link to see R2 in action !!!

R2 would bring one set of tools for the precursor mission, such as setup and geologic investigation. Not only does this improve efficiency in the types of tools needed, but it also removes the need for specialized robotic connectors. Future missions could then supply a new set of tools and use the existing tools already on location.




James Cameron on “Avatar 2” and the Impending Environmental Crisis


AVATAR 2 !!!





On stage at a private event in Silicon Valley last night, legendary director James Cameron and Google CEO Eric Schmidt held a fascinating two-hour conversation that touched on everything from the technology needs of the upcoming Avatar 2 film to the perils facing the environment if action isn’t taken.
Eric Schmidt, acting as moderator, questioned Cameron on a plethora of topics in front of an audience of Silicon Valley movers and shakers at the Churchill Club Premiere Event. The conversation started with a video highlighting Cameron’s decades of accomplishments, including Terminator, Rambo, Aliens, Total Recall, Titanic and of course Avatar. It quickly moved into a conversation about how he created the most expensive and most profitable film in human history.
Cameron said that before he wrote the script for Avatar, he wrote the basics of the story and consulted with the artists. “Now my first step is to work with the artists,” Cameron told Schmidt on stage. He does this because he needs to see the characters and immerse himself in the art (primarily CG and Photoshop these days) before he can write a script, which is a far more specific document.
Schmidt then asked Cameron about the technology he used (and in some cases invented) to create the Na’vi and the world of Pandora. The famous director described the motion capture technology used to record the movements of the actors. Particular emphasis was placed on the facial capture rig that caught changes in an actor’s facial muscles, eyes and more. It wasn’t the rig itself that was groundbreaking, Cameron said, but the algorithms used to understand the actor’s emotions and facial movements.
As for Avatar 2 and Avatar 3, Cameron didn’t reveal any of the plot details (Schmidt asked for the plot; Cameron responded by asking for Google’s source code). However, after the conversation, I asked the filmmaker what technologies he would have to invent in order to create both movies. While he mentioned that new CG would have to be developed for Avatar 2’s underwater and ocean-surface scenes, the real challenge he wants to tackle is increasing the frame rate of the sequel. Films are currently shot at 24 frames per second. His goal is to get that up to 48 or 60 frames per second, so that you get realistic shots at the time of shooting rather than having to wait six months for editing.
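To put that frame-rate jump in perspective, here is a quick back-of-the-envelope sketch of how the frame count and raw data volume scale; the runtime and bytes-per-frame figures are my own illustrative assumptions, not production numbers:

```typescript
// How frame rate scales the work: frames for an assumed 2.5-hour film,
// and raw (uncompressed) data at an assumed 12 MB per frame.
const RUNTIME_SECONDS = 2.5 * 60 * 60;
const BYTES_PER_FRAME = 12e6; // illustrative assumption

for (const fps of [24, 48, 60]) {
  const frames = fps * RUNTIME_SECONDS;
  const terabytes = (frames * BYTES_PER_FRAME) / 1e12;
  console.log(`${fps} fps: ${frames.toLocaleString()} frames, ~${terabytes.toFixed(1)} TB raw`);
}
// Doubling 24 fps to 48 fps doubles both the frame count and the raw data.
```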

Rewriting the Contract: 3D




 
James Cameron made an interesting point midway through the conversation — for work, many of us sit in front of our screens all day long. Yet when we want to relax… we watch screens. Sometimes we watch multiple screens.
The acclaimed director saw this and decided that he wanted to “find a fundamental way to rewrite the contract between humans and their visual media.” His tool of choice, as many of you know, is 3D.
Cameron sparked a new era of film with the spectacular 3D technology he created specifically for Avatar. The result has been a growing number of movies turning to 3D to enhance the movie experience.
He believes that there will be no barriers to 3D ubiquity in the next five to ten years. The first big breakthrough will be when it becomes mainstream in the home. He pointed out that there are already millions of 3D-capable TV sets on the market (many of which we saw at CES 2010); he says the real barrier to 3D going mainstream in the home is the lack of TV programming in 3D. Discovery and ESPN may be jumping into 3D, but we’re still years away from seeing The Big Bang Theory in three dimensions.
Cameron also believes 3D has to become a more comfortable experience to go mainstream. Google’s CEO took some time to explain the technology behind 3D glasses to the audience (polarized lenses deliver a different image to each eye). Cameron believes we’re not far off from a time when we don’t need the glasses to watch 3D movies and TV shows. He says this will be especially important for gamers, who can sit for up to eight hours in front of a screen at a time. The Nintendo 3DS is already a step in that direction.
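For the curious, the polarized-glasses trick Schmidt described reduces to Malus's law: a polarizing filter passes light in proportion to cos² of the angle between the light's polarization and the filter's axis, so each lens passes its own eye's image and blocks the other. (Modern cinemas mostly use circular polarization, but the blocking principle is the same.) A toy illustration:

```typescript
// Malus's law: transmitted intensity = I0 * cos^2(theta),
// where theta is the angle between the light's polarization and the filter axis.
function transmitted(i0: number, thetaRad: number): number {
  return i0 * Math.cos(thetaRad) ** 2;
}

// Left-eye image polarized at 0°, right-eye image at 90°.
// The left lens (axis at 0°) passes the left image and blocks the right one.
console.log(transmitted(1, 0));           // 1  -> left image passes
console.log(transmitted(1, Math.PI / 2)); // ~0 -> right image blocked
```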

“We’re the Comet This Time”


The vast majority of the conversation turned towards ecological issues when Eric Schmidt described Avatar as a narrative about the world’s ecology. “Why do you care so much about it?” Schmidt asked Cameron. “What is your responsibility and why are you using your significant perch?”
“Any movie can be a teaching moment, but it has to be wrapped in powerful entertainment,” Cameron stated in response. He says part of the reason Avatar succeeded was that it spoke to the human psyche and heart. Specifically, it spoke to something he believes we all know: that we’re becoming disconnected from nature and that we are on a precipice.
“If we don’t take control over our stewardship of our planet,” Cameron began, “the planet we bequeath to our children and our grandchildren will be in significant danger.”
The next part of the conversation focused on the statistics supporting Cameron and Schmidt’s positions on the environment. They said that if we do nothing about rising global temperatures, 70% of species will be extinct by the end of this century. Both men pointed out that while an average temperature rise of a few degrees would be devastating, the rise would be three times as great at the Arctic and Antarctic poles.
Cameron travels a great deal to bring awareness to his cause. He intends to create several documentaries on the issue during the filming of Avatar 2 and Avatar 3. He is also deeply involved in a project to create a vehicle that will reach the absolute bottom of the ocean, something that has been accomplished only once, with a vehicle they described as a “gasoline-filled balloon.”
While they covered a lot of ground (more than I can reasonably type up), there was one quote that really summed up Cameron and Schmidt’s thoughts on our treatment of the environment. It was in reference to the comet that killed the dinosaurs.
“We’re the comet this time,” Cameron said.


Mozilla Gives Firefox a New Add-On for Audio and Video Recording


Mozilla Labs has been working hard on browser-based audio and video — not just for playback, but also for recording. Labs’ newest creation, called Rainbow, lets developers access your hardware’s video and audio recording capabilities with a few lines of JavaScript.
The files created are all in open formats, including Theora, Vorbis and Ogg (support for WebM and other formats is planned on the product’s roadmap). Once media is captured, the files can be accessed via the DOM with the HTML5 File APIs.
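I haven't dug into Rainbow's actual add-on API, so the sketch below is purely hypothetical: every name in it is my own invention, not Rainbow's real interface. It only mirrors the flow the announcement describes, capture media in a few lines of script, then read the result back through the standard File machinery:

```typescript
// Purely hypothetical sketch of a Rainbow-style capture flow.
// None of these names are Rainbow's real API; they only illustrate the
// announced shape: record from hardware, get a file, read it via File APIs.
interface RecorderSession {
  stop(): File; // hands back the captured media as a DOM File
}

declare const rainbow: {
  // hypothetical: begin capturing audio/video in an open format
  record(options: { audio: boolean; video: boolean; format: "ogg" | "webm" }): RecorderSession;
};

const session = rainbow.record({ audio: true, video: true, format: "ogg" });

setTimeout(() => {
  const clip = session.stop();
  // HTML5 File API: read the captured clip like any other file
  new FileReader().readAsArrayBuffer(clip);
}, 5000);
```

The real API may look nothing like this; the point is only that captured media lands in the same File APIs the rest of the web platform already understands.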
Mozilla also wants to enable live streaming video capabilities for the add-on.
Mozilla Labs employee Anant Narayanan wrote in a blog post today that the Labs team had “experimented with audio recording in the browser as part of the Jetpack prototype.” This development, however, is still a pre-alpha prototype at the moment. As such, it only works with Firefox nightly builds on Macs.
Another Mozilla experiment we’ve liked a lot lately is Chromeless, a DIY tool for developers who want to create their own web browser UIs.
In general, multimedia as part of the web browser experience is becoming increasingly experimental and interactive; we’re excited to see where Mozilla and developers take Rainbow in the near future. If you want to give it a whirl, you can check out the source on GitHub.


Thursday, October 28, 2010

The World's ultra-fast computing system designed in China !!!

The World’s Fastest Supercomputer Now Belongs to China !!!




A new supercomputer built in China is poised to take the number one spot in the twice-yearly Top 500 list of the world’s fastest supercomputers scheduled to be released in November.
Previously ranked seventh when the index was last released in June, China’s Tianhe-1A can now reach sustained performance levels of 2.507 petaflops, 43 percent faster than any other known supercomputer.


In fact, the Tianhe-1A was not even the fastest supercomputer in China in the last Top 500 list. That award went to the country’s Nebulae supercomputer based in Shenzhen, which recorded performance levels of 1.271 petaflops, a surprise second in the global index trailing only the U.S.-based Cray Jaguar system’s 1.75 petaflops.
The ultra-fast Tianhe-1A computing system, designed by China’s National University of Defense Technology and located at the National Supercomputing Center in Tianjin, derives its performance from 7,168 Nvidia Tesla M2050 graphics processing units and 14,336 Intel chips. It has computing power equivalent to that of 175,000 laptops and is three times more power-efficient than current systems, according to Nvidia, which has already dubbed it “the fastest system in China and in the world today.”

In addition, the supercomputer has a theoretical peak performance of 4.669 petaflops when all its graphics processing units are operational, according to an Nvidia spokesman.
“I don’t know of another system that is going to be anywhere near the performance and the power of this machine,” said Jack Dongarra, a U.S. supercomputer expert who has overseen the Top 500 index since it was first established in 1993 and who inspected China’s new system in Tianjin last week. “It is quite impressive.”
China will utilize the Tianhe-1A as an “open access” system, available to other countries and organizations to use for large scale scientific projects and computations, according to Ujesh Desai, Nvidia’s vice president of product marketing.
Supercomputers are used for complex research simulations covering climate change modeling, genomics, seismic imaging, military design and code breaking.

Configuration:
GPUs: 7,168 Nvidia Tesla M2050s
CPUs: 14,336 Intel Xeons
Cost: $88 million
Size: 103 cabinets weighing 155 tons
Power: 4.04 megawatts of electricity
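From those figures you can work out the machine's power efficiency yourself; a quick sketch using only the numbers quoted in this post:

```typescript
// Power efficiency of Tianhe-1A, from the figures quoted above.
const sustainedFlops = 2.507e15; // 2.507 petaflops (sustained)
const powerWatts = 4.04e6;       // 4.04 megawatts

const flopsPerWatt = sustainedFlops / powerWatts;
console.log(`${(flopsPerWatt / 1e6).toFixed(0)} MFLOPS per watt`); // ≈ 620
```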

Tianhe-1A ousted the previous record holder, the Cray XT5 Jaguar, which is used by the U.S. National Center for Computational Sciences at Oak Ridge National Laboratory. Jaguar is powered by 224,162 Opteron processor cores and achieves a performance record of 1.75 petaflops.



Tianhe-1A


Supercomputer




A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".
Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. As of May 2010, the Cray Jaguar is the fastest supercomputer in the world.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell processors, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time; often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.

Supercomputer challenges, technologies

  • A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
  • Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds (see the sketch after this list). Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
  • Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
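A quick sanity check of the speed-of-light point above: light covers only about 30 cm per nanosecond, so a machine tens of meters across simply cannot get below tens of nanoseconds of component-to-component latency. A minimal sketch:

```typescript
// Why big machines have unavoidable latency: light travels ~0.3 m per ns.
const SPEED_OF_LIGHT = 299_792_458; // metres per second

function minLatencyNs(distanceMetres: number): number {
  return (distanceMetres / SPEED_OF_LIGHT) * 1e9;
}

console.log(minLatencyNs(0.3)); // ~1 ns across 30 cm
console.log(minLatencyNs(30));  // ~100 ns across a 30 m machine room
```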
Technologies developed for supercomputers include:
  • Vector processing
  • Liquid cooling
  • Non-Uniform Memory Access (NUMA)
  • Striped disks (the first instance of what was later called RAID)
  • Parallel file systems.


The fastest supercomputers today


In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). This measurement is based on a particular benchmark, which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
"Petascale" supercomputers can process one quadrillion (10^15) (1,000 trillion) FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one quintillion (10^18) FLOPS (one million teraflops).

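Since these prefixes are just powers of ten, the conversions are mechanical. A small sketch, using the standard ~(2/3)n³ operation count for LU decomposition of an n×n matrix (the matrix size below is an arbitrary example, not a real benchmark configuration):

```typescript
// FLOPS prefixes are powers of ten.
const TFLOPS = 1e12;
const PFLOPS = 1e15;
const EFLOPS = 1e18;

console.log(PFLOPS / TFLOPS); // 1000: a petaflop is a thousand teraflops
console.log(EFLOPS / TFLOPS); // 1e6: an exaflop is a million teraflops

// LU decomposition of an n x n matrix costs roughly (2/3) * n^3 operations,
// so a 1 PFLOPS machine factors a 1,000,000 x 1,000,000 matrix in about:
const n = 1e6;
const ops = (2 / 3) * n ** 3;
console.log(`${(ops / PFLOPS).toFixed(0)} seconds at 1 PFLOPS`); // ≈ 667
```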
Timeline of supercomputers




Each entry lists the year, the system, its peak speed (Rmax), and its location:

1938: Zuse Z1, 1 OPS (Konrad Zuse, Berlin, Germany)
1941: Zuse Z3, 20 OPS (Konrad Zuse, Berlin, Germany)
1943: Colossus 1, 5 kOPS (Post Office Research Station, Bletchley Park, UK)
1944: Colossus 2 (Single Processor), 25 kOPS (Post Office Research Station, Bletchley Park, UK)
1946: Colossus 2 (Parallel Processor), 50 kOPS (Post Office Research Station, Bletchley Park, UK)
1946: UPenn ENIAC (before 1948+ modifications), 5 kOPS (Department of War, Aberdeen Proving Ground, Maryland, USA)
1954: IBM NORC, 67 kOPS (Department of Defense, U.S. Naval Proving Ground, Dahlgren, Virginia, USA)
1956: MIT TX-0, 83 kOPS (Massachusetts Inst. of Technology, Lexington, Massachusetts, USA)
1958: IBM AN/FSQ-7, 400 kOPS (25 U.S. Air Force sites across the continental USA and 1 site in Canada; 52 computers)
1960: UNIVAC LARC, 250 kFLOPS (Atomic Energy Commission (AEC), Lawrence Livermore National Laboratory, California, USA)
1961: IBM 7030 "Stretch", 1.2 MFLOPS (AEC, Los Alamos National Laboratory, New Mexico, USA)
1964: CDC 6600, 3 MFLOPS (AEC, Lawrence Livermore National Laboratory, California, USA)
1969: CDC 7600, 36 MFLOPS
1974: CDC STAR-100, 100 MFLOPS
1975: Burroughs ILLIAC IV, 150 MFLOPS (NASA Ames Research Center, California, USA)
1976: Cray-1, 250 MFLOPS (Energy Research and Development Administration (ERDA), Los Alamos National Laboratory, New Mexico, USA; 80+ sold worldwide)
1981: CDC Cyber 205, 400 MFLOPS (~40 systems worldwide)
1983: Cray X-MP/4, 941 MFLOPS (U.S. Department of Energy (DoE): Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Battelle, Boeing)
1984: M-13, 2.4 GFLOPS (Scientific Research Institute of Computer Complexes, Moscow, USSR)
1985: Cray-2/8, 3.9 GFLOPS (DoE, Lawrence Livermore National Laboratory, California, USA)
1989: ETA10-G/8, 10.3 GFLOPS (Florida State University, Florida, USA)
1990: NEC SX-3/44R, 23.2 GFLOPS (NEC Fuchu Plant, Fuchū, Tokyo, Japan)
1993: Thinking Machines CM-5/1024, 59.7 GFLOPS (DoE, Los Alamos National Laboratory / National Security Agency)
1993: Fujitsu Numerical Wind Tunnel, 124.50 GFLOPS (National Aerospace Laboratory, Tokyo, Japan)
1993: Intel Paragon XP/S 140, 143.40 GFLOPS (DoE, Sandia National Laboratories, New Mexico, USA)
1994: Fujitsu Numerical Wind Tunnel, 170.40 GFLOPS (National Aerospace Laboratory, Tokyo, Japan)
1996: Hitachi SR2201/1024, 220.4 GFLOPS (University of Tokyo, Japan)
1996: Hitachi/Tsukuba CP-PACS/2048, 368.2 GFLOPS (Center for Computational Physics, University of Tsukuba, Tsukuba, Japan)
1997: Intel ASCI Red/9152, 1.338 TFLOPS (DoE, Sandia National Laboratories, New Mexico, USA)
1999: Intel ASCI Red/9632, 2.3796 TFLOPS (DoE, Sandia National Laboratories, New Mexico, USA)
2000: IBM ASCI White, 7.226 TFLOPS (DoE, Lawrence Livermore National Laboratory, California, USA)
2002: NEC Earth Simulator, 35.86 TFLOPS (Earth Simulator Center, Yokohama, Japan)
2004: IBM Blue Gene/L, 70.72 TFLOPS (DoE/IBM Rochester, Minnesota, USA)
2005: IBM Blue Gene/L, 136.8 TFLOPS (DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA)
2005: IBM Blue Gene/L, 280.6 TFLOPS (DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA)
2007: IBM Blue Gene/L, 478.2 TFLOPS (DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA)
2008: IBM Roadrunner, 1.026 PFLOPS (DoE, Los Alamos National Laboratory, New Mexico, USA)
2008: IBM Roadrunner, 1.105 PFLOPS (DoE, Los Alamos National Laboratory, New Mexico, USA)
2009: Cray Jaguar, 1.759 PFLOPS (DoE, Oak Ridge National Laboratory, Tennessee, USA)
2010: Tianhe-1A, 2.507 PFLOPS (National University of Defense Technology (NUDT), China)