
Tackling intractable computing problems
Computers have proven capable of helping scientists solve many research problems, from the dynamics of black hole mergers to distant connections in reams of genomic data.
But there are some problems that computers cannot solve — at least not in a reasonable time frame.
The question of which problems can and cannot be solved efficiently — the study of “intractability” — is a great scientific challenge for computer scientists. One such question is among the longest-standing unsolved problems in computer science: “P versus NP.”
This problem can be thought of as the question of whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. Or, put another way: is it substantially harder to find a solution than it is to check that a given solution is correct?
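The verify-versus-solve asymmetry can be illustrated with subset sum, a classic NP-complete problem. This is a hypothetical sketch, not anything from the project itself: checking a proposed subset takes time linear in the input, while the only known general way to find one is to search through subsets, of which there are exponentially many.

```python
import itertools

def verify_subset_sum(numbers, target, candidate):
    """Checking a proposed solution: fast, linear in the input size."""
    return sum(candidate) == target and all(x in numbers for x in candidate)

def solve_subset_sum(numbers, target):
    """Finding a solution by brute force: tries up to 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in itertools.combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
found = solve_subset_sum(nums, 9)            # exponential-time search
print(found, verify_subset_sum(nums, 9, found))  # → [4, 5] True
```

If P = NP, a search like `solve_subset_sum` could always be replaced by something as fast as the verifier; most computer scientists suspect it cannot.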
(P vs. NP is one of the Clay Mathematics Institute Millennium Prize Problems, with a million-dollar prize for whoever can find a solution.)
Intractability — if it truly exists — means we might not be able to always find accurate, efficient algorithms for some important problems: from eliminating bugs from software programs, to knowing for sure if and when our encryption schemes are secure, to fully understanding how the brain works.
Limitations in these areas would have implications for privacy, economics and many scientific and societal problems.
Solving serious games
Supported by a $10 million National Science Foundation (NSF) Expeditions in Computing award, an interdisciplinary, multi-institutional team led by Sanjeev Arora at Princeton University worked between 2008 and 2013 to find the sources of intractability, to circumvent intractability when possible using approximations and other means, and to understand its implications for other areas, from physics and biology to economics and information theory.
During that time, the team made several theoretical advances, including a new understanding of the Unique Games Conjecture, one of the approaches used to try to make progress on the P versus NP question.
The Unique Games Conjecture postulates that for many of the problems people would most like to solve, the current algorithms for finding a good approximation cannot be improved.
Arora, working with Boaz Barak of Microsoft Research (previously of Princeton) and David Steurer of Cornell University, showed that there is an algorithm for solving Unique Games that is in fact better: one that is faster than exponential-time (where the time it takes to approximate the answer grows exponentially with the data size) but still not as fast as polynomial-time (where the time to solution grows as a fixed power of the data size). This means that some complex and seemingly intractable problems can be solved more efficiently than believed.
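The gap between those running-time regimes can be made concrete with a sketch. The functions below are illustrative growth rates only, not the Arora-Barak-Steurer algorithm itself; subexponential here means something like 2 raised to a small power of n, which sits between polynomial and fully exponential growth.

```python
def polynomial(n):
    """Polynomial time: grows as a fixed power of the input size."""
    return n ** 3

def subexponential(n):
    """Subexponential time, e.g. 2^(n^(1/3)): between the other two regimes."""
    return 2 ** round(n ** (1 / 3))

def exponential(n):
    """Exponential time: doubles with every extra unit of input."""
    return 2 ** n

# Step counts for a few input sizes; exponential quickly dwarfs the rest.
for n in (8, 64, 512):
    print(n, polynomial(n), subexponential(n), exponential(n))
```

At n = 512 the exponential count is an unusable 155-digit number, while the subexponential count is only 256 — which is why an algorithm in that middle regime counts as real progress even though it falls short of polynomial time.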
Their results were published in the October 2015 issue of the Journal of the ACM.
On the other hand, Subhash Khot, a theoretical computer scientist on the team who introduced the conjecture in the first place and subsequently won the 2010 Alan T. Waterman Award, made important progress toward proving that the Unique Games Conjecture is in fact true. (There is still no consensus in the community regarding the truth of the Unique Games Conjecture.)
The team wrestled with a related real-world application as well: public-key cryptography, the means by which e-commerce and much of internet traffic is kept secure.
All current forms of public-key cryptography (used in protocols such as SSL and SSH) rely on a handful of algebraic methods. However, emerging algorithms, both classical and quantum, could break them all. The team derived fundamentally new forms of public-key cryptography that might remain secure far into the future — a way of harnessing intractability to support cybersecurity.
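Public-key cryptography depends on exactly the kind of asymmetry discussed above: an operation that is easy in one direction and believed intractable to reverse. The toy sketch below shows the classic example underlying RSA-style schemes — it is not the team's new constructions, and the primes are far too small to be secure (real keys use primes hundreds of digits long).

```python
def trial_factor(n):
    """Recover a factorization by trial division, the naive hard direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

# Easy direction: multiplying two (toy) primes is a single fast operation.
p, q = 10007, 10009
n = p * q  # 100160063

# Hard direction: recovering p and q from n alone takes thousands of
# divisions here, and the cost grows exponentially in the digit count of n.
print(trial_factor(n))  # → (10007, 10009)
```

Quantum algorithms (Shor's algorithm) break this particular asymmetry, which is why the article's point about schemes built on different intractable problems matters.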
Insights from the study of intractability were applied to other fields, too. For instance, it was shown that the task of evaluating the worth of a financial derivative, the type of collateralized debt obligation that had a key role in the financial collapse of 2008, is related to solving intractable problems. In fact, under widely held intractability assumptions, it is possible for a person with secret information to construct derivatives that are basically indistinguishable from a normal derivative — or more precisely, that distinguishing between the two requires a huge amount of computation — but are actually worthless.
“Some of the important leaps, for example in the understanding of efficient approximation, public key cryptography, arithmetic complexity and the role of intractability in other sciences, seem to happen mainly due to the collaborative environment of the Intractability Center, supported by the NSF Expedition,” said Avi Wigderson, a research team member from the Institute for Advanced Study. “I believe we proved that large groups can be effectively harnessed for theoretical projects.”
As an added benefit, the project served as a training ground for theoretical computer scientists. According to Wigderson, “the project trained a couple of dozen students and postdocs, many of whom are young stars in algorithms and complexity theory.”
NSF Program Director Tracy Kimbrel agreed. “In addition to conducting world-class research in theoretical computer science, the team also innovated in attracting new talent to theory via new summer programs for undergraduates and high school students and a highly successful workshop where established women in theory reached out to undergraduate and entering graduate student women,” Kimbrel said. “The role models at the ‘Women in Theory’ workshops have been credited with inspiring numerous women to pursue studies in theoretical computer science.”
The researchers continue to build on the progress of the Expedition, taking the improved understanding of intractability in new directions.
New directions
Arora and colleagues at Princeton, for instance, are currently working on a further NSF-supported project to develop machine learning algorithms with provable guarantees about the quality of the solutions they provide or the time it will take to run the algorithm.
Such guarantees will be very important as machine learning algorithms are increasingly used for life-or-death applications like medical decision-making. Finding such guarantees often involves designing fundamentally new algorithms, or proving that such guarantees hold for existing algorithms.
“It is fair to say that the ideas and young researchers generated by this project will continue to impact research for many years to come,” Arora said.
Source: NSF