MIT Technology Review's 2020 "Top 10 Breakthrough Technologies"

The MIT Technology Review's annual "Top 10 Breakthrough Technologies" (TR10) list has arrived on schedule. Since 2001, the MIT Technology Review has named the year's ten breakthrough technologies, a list of global scientific and technological bets that has accurately anticipated the rise of technologies such as brain-computer interfaces, smartwatches, cancer gene therapy, and deep learning.

Of course, it is not so much "prediction" as judgment: standing at the forefront of global science and technology, and having watched a century of technological change settle out, the MIT Technology Review offers a feasibility analysis of how research moves toward industry, and an assessment of each technology's commercial prospects and influence.

As in previous years, this forward-looking list attempts to define the ten most important technical directions at the start of the year, while the editorial board tries to bring something fresh to this highly influential annual project. Last year, for example, Bill Gates was invited to curate the selection as a guest editor.

So this year, standing at the handoff point between the old decade and the new, what surprises does the MIT Technology Review's list have in store?

Looking back at the top 10 breakthrough technologies of 2013: thanks to the accelerating development of artificial intelligence, we included "deep learning" among that year's breakthrough technologies and accurately predicted a three-to-five-year window before it took off.

In 2010, we nominated mobile phones with 3D holographic displays, microbes that convert carbon dioxide in the air directly into diesel, electronic implants that degrade inside the human body, and "social TV" (think of Twitter today).

In 2009, before Apple acquired Siri, we listed it as a breakthrough technology. At the time, Siri said it would be more than a voice-activated search engine: it would also be able to book restaurants or flights for users.

Figure: Gideon Lichfield, editor-in-chief of the MIT Technology Review

To be honest, if we could really predict exactly which new technologies will succeed, we wouldn't publish the list; we would set up a fund and invest in them. That is what venture capitalists do every day, and few of them succeed. But futurism is not about guessing the future; it is about challenging the imagination we already have, so that when the future arrives we are not caught off guard.

So in 2020 (everyone likes a round-numbered year), we decided to expand this year's top 10 breakthrough technology list. We take a closer look at these technologies and collect some outside predictions for 2030.

Moore's Law, for example, has been the most reliable prediction of modern times, but it is running out of road. Our colleague David Rotman has studied Moore's Law and predicts that it is about to fail; he is curious about how that will affect future progress. Independent journalist Rob Arthur examines why the 2016 U.S. presidential election was so badly forecast and offers predictions for the 2020 race. Writer Brian Bergstein looks at researchers' progress toward AI that understands cause and effect. And Bobbie Johnson asked a number of futurists what they think of the future and what they expect in 2030.

Figure: The MIT Technology Review's 2020 list of the world's top 10 breakthrough technologies: the unhackable internet, ultra-personalized drugs, digital currency, anti-aging drugs, AI-discovered molecules, satellite mega-constellations, quantum supremacy, tiny AI, differential privacy, and climate change attribution.

Here are the details of the list:

Unhackable Internet

Significance: The internet is increasingly vulnerable to hacking; a quantum internet would be unhackable.

Key researchers: Delft University of Technology, Quantum Internet Alliance, University of Science and Technology of China

Maturity: 5 years

An internet based on quantum physics will soon make inherently secure communication a reality. A team led by Stephanie Wehner at Delft University of Technology is building a network, connecting four cities in the Netherlands, entirely by means of quantum technology. Messages sent over this network will be impossible to crack.

Over the past few years, scientists have learned to transmit pairs of photons across fiber-optic cables in a way that absolutely protects the information encoded in them. A Chinese team led by Professor Pan Jianwei used the technology to build a 2,000-kilometer backbone network between Beijing and Shanghai, but that project relies in part on classical components that periodically break the quantum link before establishing a new one, which leaves a risk of hacking.

In contrast, the Delft University of Technology network will be the first to transmit information between cities entirely using quantum technology.

The technology relies on a particle behavior called “quantum entanglement.” Entangled photons cannot be secretly read without destroying their contents.
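Why eavesdropping is detectable can be illustrated even with a toy, purely classical simulation of quantum key distribution. The sketch below models a BB84-style prepare-and-measure exchange rather than the Delft entanglement scheme, and all of its numbers are invented for illustration, but it shows the core effect: an intercept-and-resend eavesdropper unavoidably corrupts roughly a quarter of the shared key bits, so the two endpoints can spot the intrusion by comparing a sample of their key.

```python
import random

def quantum_key_exchange(n_photons=10_000, eavesdrop=False):
    """Simulate a BB84-style exchange; return the error rate in the sifted key."""
    errors, sifted = 0, 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)               # Alice's secret bit
        alice_basis = random.randint(0, 1)       # basis she encodes it in
        photon_bit, photon_basis = bit, alice_basis

        if eavesdrop:                            # Eve intercepts, measures, re-sends
            eve_basis = random.randint(0, 1)
            if eve_basis != photon_basis:        # wrong basis -> random outcome
                photon_bit = random.randint(0, 1)
            photon_basis = eve_basis

        bob_basis = random.randint(0, 1)         # Bob measures in a random basis
        bob_bit = photon_bit if bob_basis == photon_basis else random.randint(0, 1)

        if bob_basis == alice_basis:             # keep only matching-basis rounds
            sifted += 1
            errors += int(bob_bit != bit)
    return errors / sifted

print("error rate, no eavesdropper:   %.3f" % quantum_key_exchange())                 # ~0.00
print("error rate, with eavesdropper: %.3f" % quantum_key_exchange(eavesdrop=True))   # ~0.25
```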

However, entangled particles are difficult to create, and harder still to transmit over long distances. Wehner's team has shown that it can send them over a distance of more than 1.5 kilometers (0.93 miles), and is confident a quantum link between Delft and The Hague can be established by around the end of this year. Extending the network over longer distances will require quantum repeaters.

Delft University of Technology and others are designing such repeaters. Wehner says the first quantum repeater should be completed in the next five to six years, with a global quantum network following by the end of the decade.

Ultra-personalized drugs

Significance: Genetic medicines tailored to a single individual offer a glimmer of hope to people with terminal illnesses.

Lead researchers: A-T Children’s Project, Boston Children’s Hospital, Ionis Pharmaceuticals, U.S. Food and Drug Administration

Maturity: Now

Let's start by imagining a desperate case: a child is suffering from an extremely rare, fatal disease. Not only is there no effective drug for the disease, there is not even a scientist trying to study it.

In the end, the doctor can only sigh that the illness is simply too rare for anything to be done.

This could change, thanks to new drugs that can be tailored to a person's genes. If an extremely rare disease is caused by a specific DNA defect (and there are thousands of such diseases), there is now at least a chance of fixing that gene.

The little girl Mila Makovec is a real case. Devastated by a unique genetic mutation, she received a drug that doctors spent a year tailoring to her particular genetic defect; it was named "milasen," after her. The findings were published in the New England Journal of Medicine in October 2019.

Although the treatment has not cured Mila, it appears to have stabilized her condition: she has fewer seizures, and she can stand and walk with assistance.

Mila's treatment was possible because developing a new gene medicine has never been faster or more promising. The new drugs may take the form of gene replacement, gene editing, or antisense therapy; Mila's treatment is an antisense oligonucleotide, which works rather like a molecular eraser, eliminating or correcting erroneous genetic instructions. What these treatments have in common is that they can be programmed, almost digitally and very quickly, to correct or compensate for genetic diseases, down to individual DNA letters.

How many people are in a situation like Mila's? Not many at the moment, but the number is bound to grow.

The good news is that challenges that once left researchers helpless now look likely to turn around: they can hope to find the solution written in the patient's own DNA.

The real challenge for such one-patient treatments, however, is that they run counter to almost all existing rules for developing, testing and selling new drugs. Who pays for these drugs when they help only one person, yet need large teams to design and manufacture them?

Digital currency

Significance: As physical cash is used less and less, the freedom to transact without an intermediary shrinks with it. At the same time, digital-currency technology could be used to splinter the global financial system.

Key researchers: People's Bank of China, Facebook

Maturity: 2020

In June 2019, Facebook unveiled a "global digital currency" called Libra. The move was immediately met with a backlash, and Libra may never launch, at least not in its originally envisaged form.

But it has already had an impact: within days of Facebook's announcement, an official at the People's Bank of China hinted that it would speed up development of its own digital currency. Today, China is on track to become the first major economy to issue a digital version of its currency, intended as a replacement for physical cash.

China is clearly mindful of Libra's potential impact: it could reinforce America's disproportionate power over the global financial system, which stems from the dollar's role as the world's de facto reserve currency. Some speculate that China will push its digital currency internationally.

Facebook's Libra push has now become political. In October 2019, Facebook CEO Mark Zuckerberg promised Congress that Libra "will extend America's financial leadership as well as our democratic values and oversight around the world." The war over digital currencies has begun.

Anti-aging drugs

Significance: Many different diseases, such as cancer, heart disease and dementia, might be treated by slowing aging.

Principal researchers: Unity Biotechnology, Alkahest, Mayo Clinic, Oisín Biotechnologies

Maturity: Within five years

The first wave of new anti-aging drugs has begun human testing. Although they cannot yet extend the human lifespan, they promise to treat specific diseases by slowing or reversing a fundamental process of aging.

These drugs, known as senolytics, work by eliminating certain cells that accumulate with age. These so-called "senescent" cells cause low-grade inflammation, inhibit normal cell-repair mechanisms, and create a harmful environment for neighboring cells.

In June 2019, San Francisco-based Unity Biotechnology announced preliminary trial results for one such drug in patients with mild to severe arthritis of the knee, and it expects results from larger clinical trials in the second half of 2020. The company is also developing similar drugs to treat age-related diseases of the eyes and lungs.

Senolytics, and a number of other promising new treatments now being tested in humans, target the natural biological processes that are root causes of both aging and various diseases.

Alkahest, a biotech company, injects patients with certain components found in the blood of young people, and says it hopes to halt cognitive and functional decline in patients with mild to moderate Alzheimer's disease. The company is also running human trials of drugs for Parkinson's disease and dementia.

And in December 2019, researchers at Drexel University College of Medicine began testing whether a cream containing rapamycin, an immunosuppressive drug, can slow the aging of skin.

These experiments reflect researchers' growing efforts to find out whether the many diseases associated with aging, such as heart disease, arthritis, cancer and dementia, can have their onset delayed by targeting aging itself.

AI-Discovered Molecules

Significance: Bringing a new drug to market costs about $2.5 billion on average, in part because it is so difficult to find molecules with the potential to become drugs.

Principal researchers: Insilico Medicine, Kebotix, Atomwise, University of Toronto, Benevolent AI

Maturity: 3-5 years

The number of molecules that could be turned into potentially life-saving drugs is unimaginably large: researchers estimate it at around 10^60, more than the number of atoms in the solar system, offering almost unlimited chemical possibilities, if only chemists could find the molecules worth having.

Machine-learning tools can now explore large databases of known molecules and their properties, using this information to generate new possibilities and making it faster and cheaper to find new drug candidates.
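As a rough, hypothetical illustration of that learn-then-rank loop (not Insilico Medicine's actual pipeline, which uses deep generative models over real chemical structures), one can train a model on a labeled set of known molecules and then score a large batch of new candidates, keeping only the handful worth making in the lab:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# A pretend "database" of 2,000 known molecules, each a 128-bit fingerprint,
# labeled active/inactive against a target. All of this is synthetic data.
known_fingerprints = rng.integers(0, 2, size=(2000, 128))
known_activity = (known_fingerprints[:, :16].sum(axis=1) > 8).astype(int)  # hidden toy rule

# Learn a structure-to-activity model from the known molecules.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(known_fingerprints, known_activity)

# "Generate" 100,000 candidate fingerprints and rank them by predicted activity.
candidates = rng.integers(0, 2, size=(100_000, 128))
scores = model.predict_proba(candidates)[:, 1]
best = np.argsort(scores)[::-1][:6]
print("predicted activity of the top 6 candidates:", np.round(scores[best], 3))
```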

In September 2019, a team from Hong Kong-based Insilico Medicine and the University of Toronto achieved a major experimental milestone, demonstrating the effectiveness of the strategy by synthesizing several candidate drugs identified by AI algorithms.

Using techniques related to deep learning and generative models, similar to those that allowed a computer to beat the world champion at the ancient game of Go, the researchers identified some 30,000 new molecules with desirable properties. They selected six to synthesize and test, one of which showed high activity in animal experiments and proved promising.

Chemists in drug discovery often dream up new molecules, an art honed by years of experience and, among the best drug hunters, a keen intuition. Now these scientists have a new tool to expand their imaginations.

Satellite Mega-Constellations

Significance: These systems could blanket the globe with high-speed internet, or turn Earth's orbit into a minefield of debris.

Key researchers: SpaceX, OneWeb, Amazon, Telesat

Maturity: Now

More than 3.5 billion people worldwide still have no access to the internet. Companies such as SpaceX and OneWeb believe they can launch thousands of satellites, forming giant constellations that bring broadband internet terminals within reach of every inch of the planet. As long as the sky above a terminal is unobstructed, it can relay the internet to any nearby device. SpaceX alone plans to put 4.5 times more satellites into orbit within a decade than have been launched in all of human history.

Deploying these huge constellations has become possible because we have learned to build smaller satellites and to launch them more cheaply. In the space-shuttle era, launching a satellite cost about $24,800 per pound; launching a small four-ton communications satellite cost close to $200 million.

Today, a SpaceX Starlink satellite weighs only about 500 pounds (227 kilograms). Reusable rockets and low manufacturing costs mean dozens of satellites can be launched at a time, dramatically reducing costs. A SpaceX Falcon 9 launch now costs only about $1,240 per pound.
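A quick back-of-the-envelope calculation with the figures quoted above shows how dramatic the change is (the 500-pound satellite weight is taken from the text; the result is only an order-of-magnitude comparison):

```python
satellite_weight_lb = 500        # approximate weight of one Starlink satellite
shuttle_cost_per_lb = 24_800     # launch cost per pound in the shuttle era (from the text)
falcon9_cost_per_lb = 1_240      # launch cost per pound on a Falcon 9 today (from the text)

print(f"Shuttle era: ${satellite_weight_lb * shuttle_cost_per_lb:,} to launch one such satellite")
print(f"Falcon 9:    ${satellite_weight_lb * falcon9_cost_per_lb:,} to launch one such satellite")
print(f"Roughly {shuttle_cost_per_lb / falcon9_cost_per_lb:.0f}x cheaper per pound")
```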

Last year, the first 120 Starlink satellites went up, and the company planned to launch batches of 60 every two weeks starting in January 2020. OneWeb launched 34 satellites in February 2020 and says it will begin providing service in some regions this year. We will soon see thousands of satellites working together to bring internet access even to the poorest and most remote parts of the planet.

But some problems must be solved first if this is to happen. Some academic researchers object to the plans, arguing that large numbers of satellites will interfere with astronomical observation. Worse, with so many satellites moving in orbit, a single collision could cascade like an avalanche into countless more, producing thousands of pieces of space debris. Such a disaster could make it nearly impossible for humanity to use satellite services, or to explore space, in the future. In September 2019, a Starlink satellite nearly collided with a European Space Agency weather satellite, a cold-sweat reminder that we are not yet ready to manage this many satellites in orbit.

Over the next decade, the fate of these giant satellite constellations will determine the future of space in Earth orbit.

Quantum Supremacy

Significance: Quantum computers will be able to solve problems that classical machines cannot.

Lead researchers: Google, IBM, Microsoft, Rigetti, D-Wave, IonQ, Zapata Computing, Quantum Circuits

Maturity: 5-10 years or more

Quantum computers store and process data in a completely different way from the classical computers we are used to. In theory, they could tackle certain classes of problems that even the most powerful classical supercomputers would take thousands of years to solve, such as breaking today's cryptographic codes or precisely simulating the behavior of molecules for new drugs and materials.

Quantum computers have existed for years, but only under specific conditions can they outperform classical ones. Google's machine with 53 qubits (the qubit being the basic unit of quantum computation) took just over three minutes to complete a computing task that Google estimates would take the world's largest supercomputer 10,000 years, or 1.5 billion times longer. IBM has questioned Google's claim, arguing that the speedup would be a thousandfold at most. Even so, it was a landmark result, because every qubit a quantum computer adds roughly doubles its computing power.
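The doubling point follows from simple arithmetic: an n-qubit register is described by 2^n amplitudes, so each added qubit doubles the state space a classical simulator would have to track. A minimal sketch:

```python
# An n-qubit register is described by 2**n complex amplitudes, so every added
# qubit doubles what a classical simulator has to keep track of.
for n in (10, 20, 53, 54):
    print(f"{n:3d} qubits -> {2**n:,} amplitudes")
# 53 qubits already means about 9 quadrillion amplitudes; one more doubles it.
```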

However, Google's demonstration was strictly a proof of concept, the equivalent of punching random sums into a calculator and showing that the answers are correct. The goal now is to build machines with enough qubits to solve real problems. That is a daunting challenge: the more qubits there are, the harder it is to maintain their delicate quantum state. Google's engineers believe their approach can scale to between 100 and 1,000 qubits, which may be enough to solve some practical problems, though nobody knows exactly which ones.

And beyond that? Quantum computers capable of cracking today's cryptography will need millions of qubits, which is probably decades away. But a quantum computer that can model molecules should be relatively easy to build.

Tiny AI

Significance: Thanks to the latest AI techniques, our devices no longer need to talk to the cloud to carry out many intelligent tasks.

Key researchers: Google, IBM, Apple, Amazon

Maturity: Now

Artificial intelligence has a very real problem: to build more powerful algorithms, researchers are using ever larger amounts of data and computing power, and relying on centralized cloud services. This not only generates alarming carbon emissions; it also limits the speed at which AI applications can run and creates numerous privacy issues.

The rise of tiny AI is changing that. Tech giants and academic researchers are developing new algorithms to shrink existing deep-learning models without losing their capabilities. Meanwhile, a new generation of dedicated AI chips promises to pack more computing power into smaller physical spaces, training and running AI algorithms on far less energy.
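One of the simplest shrinking tricks, post-training weight quantization, gives a feel for how a model can get smaller without being retrained from scratch. The sketch below is a generic, hypothetical illustration rather than any particular vendor's toolchain: it stores a layer's float32 weights as int8 values plus one scale factor, cutting memory roughly fourfold at the cost of a small rounding error.

```python
import numpy as np

def quantize(weights):
    """Map float32 weights onto int8 plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(1024, 1024).astype(np.float32)   # a made-up layer's weights
q, scale = quantize(weights)

print("float32 size:", weights.nbytes, "bytes")            # ~4.2 MB
print("int8 size:   ", q.nbytes, "bytes")                  # ~1.05 MB
print("max rounding error:", float(np.abs(weights - dequantize(q, scale)).max()))
```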

These advances are starting to reach consumers. Last May, Google announced it could run Google Assistant on users' phones without sending requests to a remote server; as of Apple's iOS 13, Siri's speech recognition and the QuickType keyboard run locally on the iPhone; and IBM and Amazon now offer development platforms for building tiny AI applications.

The benefits of tiny AI are obvious. Existing services such as voice assistants, autocorrect and digital cameras will get better and faster, no longer needing to reach the cloud every time they run a deep-learning model. Localized AI is also better for privacy, since your data no longer needs to leave the device in order to improve a service or feature.

But as artificial intelligence becomes more widely distributed, new challenges come with it. It may become harder, for example, to crack down on illicit surveillance systems or deepfake videos, and discriminatory algorithms could proliferate. Researchers, engineers and policymakers now need to work together to develop technical and policy checks on these potential harms.

Differential privacy

Significance: The U.S. Census Bureau is finding it ever harder to keep its data confidential. A technique called differential privacy can solve this problem, help build trust, and could be adopted by other countries as well.

Key researchers: U.S. Census Bureau, Apple, Facebook

Maturity: Its use in the U.S. 2020 census will be the largest to date.

In 2020, the U.S. government faces a major task: counting all 330 million U.S. residents while keeping their identities confidential. Policymakers and academics need to analyze the data for legislation and research, yet the law requires the Census Bureau to ensure the data cannot be traced back to any individual.

But there are still ways to “de-anonymize” personal data, especially if census data is integrated with other public statistics.

So the Census Bureau injects "noise" into the data. It might change the ages or ethnicities of some individuals while keeping the totals for each age or ethnic group unchanged. The more noise added, the harder de-anonymization becomes.

Differential privacy is a mathematical technique that makes this noise-adding process rigorous: it adds noise while precisely measuring how much privacy is gained. Apple and Facebook already use the method to collect aggregate data without identifying particular users.
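A minimal sketch of the textbook version of this idea, the Laplace mechanism, shows the trade-off concretely. The count and the epsilon values below are illustrative, not the Census Bureau's actual parameters: noise drawn from a Laplace distribution with scale sensitivity/epsilon is added to each count, and a smaller epsilon means more privacy but noisier statistics.

```python
import numpy as np

rng = np.random.default_rng()

def private_count(true_count, epsilon, sensitivity=1.0):
    """Laplace mechanism: one person joining or leaving changes a count by at most 1."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count = 4213   # hypothetical: residents of one age group in one census block
for epsilon in (10.0, 1.0, 0.1):
    noisy = [round(private_count(true_count, epsilon)) for _ in range(5)]
    print(f"epsilon = {epsilon:>4}: {noisy}")
# A large epsilon barely perturbs the count; a small epsilon protects individuals
# but makes small-area statistics much noisier.
```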

But too much noise can render the data useless. One analysis showed that a differentially private version of the 2010 census included households that supposedly contained 90 people.

If it goes well, other federal agencies may also use this approach. Countries such as Canada and the UK are also looking at the technology.

Climate change attribution

Significance: It gives us a clearer picture of how climate change is making the weather worse, and of what we need to do to prepare for it.

Principal researchers: World Weather Attribution, Royal Netherlands Meteorological Institute, Red Cross Red Crescent Climate Centre

Maturity: Now

Last September, ten days after Tropical Storm Imelda hit the Houston area, a rapid response team announced that climate change almost certainly played a role in the storm’s creation.

The World Weather Attribution team ran high-precision computer simulations comparing two worlds: one with climate change and one without. In the former (the world we actually live in), the severe storm was up to 2.6 times more likely to occur, and up to 28 percent more intense.
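Conceptually, a number like "2.6 times more likely" comes from comparing how often the event occurs in two simulated ensembles. The sketch below uses entirely synthetic rainfall distributions, so the ratio it prints is meaningless in itself, but it shows the shape of the calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

threshold_mm = 900   # hypothetical storm-total rainfall observed during the event

# Ensembles of simulated storm rainfalls: the world with warming ("factual")
# and a counterfactual world without human-caused warming. The gamma
# distributions here are invented purely to make the example run.
factual = rng.gamma(shape=2.0, scale=220.0, size=100_000)
counterfactual = rng.gamma(shape=2.0, scale=190.0, size=100_000)

p_with = (factual >= threshold_mm).mean()
p_without = (counterfactual >= threshold_mm).mean()

print(f"P(event | with warming)    = {p_with:.4f}")
print(f"P(event | without warming) = {p_without:.4f}")
print(f"probability ratio          = {p_with / p_without:.2f}")
```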

At the beginning of this century, scientists were reluctant to link any specific event to climate change. But in the past few years, scientists have done more extreme weather attribution studies, and rapidly improving tools and technologies have made research in the field more reliable and convincing.

First, more and more detailed satellite data records help us understand natural systems. In addition, improved computing power means that scientists can perform more accurate simulations and conduct more virtual experiments.

Together with a series of other technological advances, this has allowed scientists to conclude with growing statistical confidence that global warming often makes dangerous weather events worse.

By separating the effects of climate change from other factors, these studies tell us what risks we need to prepare for, such as how much more flooding to expect and how severe heat waves will become as global warming worsens. If we choose to listen, they can help us understand how our cities and infrastructure must be transformed for a climate-changed world.