11/04/2016

Headline November 05, 2016/ ''' *SILICON SAINTS* '''






DR. ALI FARHADI holds a puny $5 computer, called a Raspberry Pi, comfortably in his palm and exults that his team of researchers has managed to squeeze into it a powerful program that can recognize thousands of objects.

Dr. Farhadi, a computer scientist at the *Allen Institute for Artificial Intelligence* here, calls his advance ''artificial intelligence at your fingertips.''

This experimental program could drastically lower the cost of artificial intelligence and improve privacy, because users would not need to share information over the Internet.

But the A.I. system is emblematic of something even more significant for the microelectronics industry as it inches closer to the physical limits of semiconductors made with silicon: It uses 1/32 of the memory and operates 58 times as fast as rival programs.
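The article does not describe how the program was compressed, but a memory reduction of roughly 1/32 is what one would expect from storing each network weight in a single bit instead of a 32-bit floating-point number. The sketch below is a hypothetical Python illustration of that idea, not the Allen Institute's actual code; the function names, matrix sizes and scaling scheme are invented for the example.

```python
# Hypothetical sketch: approximate a float32 weight matrix with 1-bit signs
# plus one scale per row, which is roughly a 1/32 memory footprint if the
# signs are bit-packed. Not the Allen Institute's actual method or code.
import numpy as np

def binarize(weights: np.ndarray):
    """Approximate W as sign(W) * alpha, with alpha the mean |W| of each row."""
    alpha = np.abs(weights).mean(axis=1, keepdims=True)   # one float per row
    signs = np.sign(weights)                               # values in {-1, 0, +1}
    return signs, alpha

def reconstruct(signs: np.ndarray, alpha: np.ndarray):
    """Dequantize for comparison against the original weights."""
    return signs * alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 512)).astype(np.float32)

    signs, alpha = binarize(w)
    approx = reconstruct(signs, alpha)

    dense_bits = w.size * 32                    # 32 bits per weight
    binary_bits = w.size * 1 + alpha.size * 32  # 1 bit per weight (if packed) + scales
    print("memory ratio:", dense_bits / binary_bits)     # roughly 30x smaller here
    print("mean abs error:", np.abs(w - approx).mean())  # cost: approximation error
```

On this toy matrix the packed representation is roughly thirty times smaller, at the price of a small approximation error in each weight.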

There is a growing sense of urgency feeding this kind of research into alternative computing methods.

For decades, computer designers have been able to count on cheaper and faster chips arriving roughly every two years. As transistors have shrunk in size at regular intervals, computing has become both more powerful and cheaper at an accelerating rate -a trend known as Moore's Law.

Two years ago, with manufacturing costs exploding and severe technical challenges growing, the cost of individual transistors stopped falling. That has ended -at least temporarily- the ability of computer makers to easily make new chips that are both faster and cheaper.

*But if silicon has its limits, ingenuity may not. Better algorithms and new kinds of hardware circuits could help scientists continue to make computers that can do more, at a lower cost*.

''It's been a fun ride,'' said Thomas M. Conte, an electrical engineer at the Georgia Institute of Technology. ''Today you're entering this patchwork world where you are going to find a better solution for a particular problem, and that's how we're going to advance in the future.''

This summer, for example, Intel acquired Nervana Systems, a small maker of specialized hardware designed to run A.I. programs more efficiently.

Last month, researchers at Argonne National Laboratory, Rice University and the University of Illinois at Urbana-Champaign published research demonstrating how a programming technique for an Intel microprocessor chip can accomplish the same work using significantly less power.

The new approach is significant, according to supercomputer designers, because the high energy requirements of the fastest computers have become the most daunting challenge as scientists try to move from today's petaflop machines -*a quadrillion computations per second*- to exascale computers a thousand times faster.

Such computers are considered necessary to solve fundamental scientific problems, like predicting the risks that climate change poses to the future of humanity.

Because of the slowdown in Moore's Law, reasons technology writer John Markoff, the arrival of exascale computing has repeatedly been pushed back.

*Though it was originally expected around 2018, projections now set the next generation as far off as 2023*.

The Argonne paper notes that a future supercomputer capable of an exaflop would multiply energy costs by a factor of a thousand.

To reduce those energy demands, the researchers demonstrated how they used a conventional Intel chip and turned off half of its circuitry devoted to what engineers call mathematical precision. Then they ''reinvested'' the savings to improve the quality of the computed result.

''Mathematical precision is like a knob that you can turn,'' said Krishna V. Palem, a Rice University computer scientist.
''The question is what you do with the saved energy.''
The researchers experimented with using the various precision modes of the microprocessor in a manner similar to a gearshift in a car, automatically shifting from higher to lower precision and back as needed to solve a problem.
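The piece does not spell out how the saved energy was reinvested, but one common way to get this gearshift effect in software is iterative refinement: do the heavy arithmetic in low precision, then spend a small amount of high-precision work correcting the answer. The following is a minimal, hypothetical Python sketch of that pattern, not the Argonne, Rice and Illinois team's actual method.

```python
# Minimal sketch of the "precision gearshift" idea: solve a linear system
# A x = b cheaply in float32 (low gear), then refine the answer with a few
# float64 correction steps (high gear). Hypothetical illustration only.
import numpy as np

def solve_with_refinement(A: np.ndarray, b: np.ndarray, steps: int = 3) -> np.ndarray:
    """Solve A x = b mostly in float32, then refine the result in float64."""
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)     # low-precision solve
    for _ in range(steps):
        r = b - A @ x                                     # residual in full precision
        dx = np.linalg.solve(A32, r.astype(np.float32))   # cheap correction step
        x += dx.astype(np.float64)                        # fold correction back in
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)

    x = solve_with_refinement(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))    # close to float64 accuracy
```

In this pattern the low-precision solve dominates the cost, while the few high-precision correction steps recover most of the accuracy that was given up.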

''There is a lot to be done by thinking more carefully about how you can save energy,'' said Marc Snir, a veteran supercomputer designer and University of Chicago computer scientist.

The Honour and Serving of the latest  *Operational Research*  on computing  continues. Thank Ya all for reading and sharing forward. And see you on the following one.

With respectful dedication  to the Scientists, Students, Professors and Teachers of the world. See Ya all on !WOW!  -the World Students Society and Twitter !E-WOW!  -the Ecosystem 2011:


''' Pushing Silicon '''

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless
