Humankind 2.0
a book in progress...
Meditations on the future of technology and society...
...to be published in China in 2016
These are raw notes taken during and after conversations between piero scaruffi and Jinxia Niu of Shezhang Magazine (Hangzhou, China). Jinxia will publish the full interviews in Chinese in her magazine. I thought of posting on my website the English notes that, while incomplete, contain most of the ideas that we discussed.
(Copyright © 2016 Piero Scaruffi)
Nanotech: History, Trends and Future (See also the slide presentation)
Narnia: What is Nanotech and why is it important?
piero:
Technology used to be transparent. By "transparent" I mean that ordinary
people could easily understand what it did and how it did it. Think of a hammer:
it is not difficult to understand how the hammer works and how it pushes
the nail into the wood. Or think of the gramophone: you can see the grooves
in the record, and the needle that follows those grooves, and you can see the
loudspeaker, so you can guess that the needle picks up bumps in the grooves
and the loudspeaker amplifies them into the sounds that we hear.
Or think of the car: plenty of car lovers can easily tell you how the car
works when they open the hood and see the battery, the carburetor,
the radiator, the cables connecting the engine to the accelerator pedal,
and the mechanical connections to the wheels.
Then technology became progressively more and more opaque. Ordinary people
have no idea how images are broadcast from a TV station to the TV set in
their living room. You press a button and suddenly there are people speaking
in that box in your living room, and those people really exist somewhere.
Very few people can
explain how a computer works: you just know that you press a button, click on
a mouse, enter some characters on a keyboard, and the computer does things
for you.
Nanotech will push technology further away from ordinary people: it will
be largely invisible. For example, it will create a world of invisible nanobots
that communicate via an invisible "cloud". Things will happen without any
visible process but with very visible effects (and hopefully positive ones!)
Today children can at least still play with electronic gadgets, but tomorrow's
children will be surrounded by technology that they cannot see, touch, or break.
The first scientist to use the term
"Nanotechnology" was
the Japanese scientist Norio Taniguchi in 1974, but it was
Eric Drexler who popularized it with his
book "Engines of Creation - The Coming Era of Nanotechnology"
(1986). Drexler also founded the Foresight Institute in Menlo Park
with Christine Peterson.
"Nano" usually refers to technology that operates at the atomic and molecular
scale, 100 nanometers or smaller (a nanometer is one billionth of a meter).
To put this length in perspective: an ant is 6 million nanometers long,
a bacterium is about 2,000 nanometers wide, and the DNA double helix is only
about 2 nanometers in diameter.
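To make the comparison concrete, here is the same arithmetic as a tiny Python sketch (purely illustrative; the numbers are just the ones quoted above):

    # The scales mentioned above, in nanometers (1 nm = one billionth of a meter).
    sizes_nm = {"ant": 6_000_000, "bacterium": 2_000, "DNA helix": 2}
    for name, nm in sizes_nm.items():
        print(f"{name:>10}: {nm:>9,} nm = {nm * 1e-9:.0e} meters")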
Progress in Nanotechnology was enabled by the invention of
the Scanning Tunneling Microscope (STM) in 1981 that allowed scientists to
work on individual atoms, and by the invention
of the Atomic Force Microscope in 1986.
It is still an expensive field that requires very expensive equipment.
The hope is that "molecular manufacturing" will become feasible on a larger scale.
At the end of 1959 at the annual meeting of the American Physical Society at the California Institute of Technology (Caltech)
the great physicist Richard Feynman gave a visionary speech that at the time was largely ignored: "There's Plenty of Room at the Bottom".
He said that it is physically possible to operate on single atoms.
He envisioned machines that "arrange the atoms the way we want".
In retrospect,
we can consider that speech as the "manifesto" of molecular manufacturing.
We want Nanotechnology that will allow us to pick up one atom at a time and
place it in the right position to form the object that we desire.
This is generally an almost impossible mission, but luckily there are
cases in which molecules self-assemble - they bind together spontaneously
in the right place - and then a new material is constructed "bottom-up".
Narnia: What does it mean for ordinary people?
piero:
Clean energy, to begin with. Solar cells are among the first beneficiaries of
Nanotechnology.
Sustainable materials. For example, materials that can self-decompose.
Improved battery technology that will allow us to recharge a smartphone in
seconds instead of hours.
Biomedicine that will be more effective at curing diseases and injuries.
Faster computing that will reinvent computer architecture.
Narnia: Let's start with medicine. How can Nanotech improve our health?
piero:
Nanorobots can deliver drugs inside the body, and sometimes we can even find such nanorobots ready-made in nature. For example, in 2016 Sylvain Martel of the Montreal Polytechnic dropped millions of bacteria into the bloodstream of a mouse to deliver chemicals to its tumor. These bacteria use a magnetic field as a compass and multiply in places with low oxygen levels (tumors suck up oxygen, and so they leave that part of the body deprived of oxygen). The scientist only needs to create a magnetic field in the region of the tumor, and then the bacteria carry the drug there by following the magnetic field and by moving towards the region with low oxygen levels. (The paper is "Magneto-aerotactic bacteria deliver drug-containing nanoliposomes to tumour hypoxic regions" in Nature Nanotechnology, 2016)
In 2016 Washington University School of Medicine in St Louis
(Rory Murphy and Wilson Ray)
and the University of Illinois at Urbana-Champaign (John Rogers' team)
published the results of a collaboration: they
built wireless brain sensors to monitor patients with traumatic brain injuries.
These nano-sensors eventually dissolve and are absorbed by the body:
no need for surgery to remove them.
Today we can already insert devices into bodies, but the problem is that
the body is prone to infections, and an infection can kill the patient even
years after the patient has been cured.
The new material for these electronic implants is a compound that dissolves in the body.
(The paper is "Bioresorbable Silicon Sensors for the Brain" in Nature, 2016)
In 2014 Rajesh Sardar at Indiana University designed a nano-sensor to detect changes in the concentration of microRNA molecules in the blood, an early warning of pancreatic cancer. (The paper is titled "Highly Specific Plasmonic Biosensors for Ultrasensitive MicroRNA Detection in Plasma from Pancreatic Cancer Patients", 2014).
In 2017 Olena Taratula's team at Oregon State University developed fluorescent nanoparticles that are activated by cancer cells and therefore pinpoint the cancer tissues to be removed by the surgeon ("A Tumor-Activatable Theranostic Nanomedicine Platform for NIR Fluorescence-Guided Surgery and Combinatorial Phototherapy").
Another medical application is under development at the
University of Colorado. One of the most pressing problems in medicine is that
we are not developing new antibiotics but bacteria keep evolving,
so we have an increasing number of antibiotic-resistant bacteria such as Salmonella, E. coli and Staphylococcus, which each year infect two million people and kill at least 23,000 in the USA alone.
These are microbes that can evolve quickly, i.e. they can quickly become
resistant to the existing antibiotics.
Anushree Chatterjee and Prashant Nagpal are using
light-activated nano-particles to attack these
bacteria. (The paper is titled "Photoexcited Quantum Dots for Killing Multidrug-resistant Bacteria" in Nature, 2016)
Prashant Nagpal is the brain behind the nano-engineering. He manipulates
materials at the nanoscale to obtain new properties. For example,
he transformed some semiconductors into conductors that are as good as metals
(something that can improve solar cells),
he found ways to convert infrared radiation into electricity
(something that could lead to a new generation of solar panels),
and he invented "quantum molecular sequencing", a
method to sequence a person's genome by using only one molecule (instead of
a drop of blood or piece of skin, which contain many more molecules).
His laboratory is a good example of how many fields can benefit from
Nanotechnology.
Narnia: Hollywood films always show big robots, but it sounds like the future will have more "nano-robots" than big robots...
piero:
In order to create new materials, we need to build new molecular structures. In the old days chemists would work in a laboratory with strange containers full of strange substances. David Leigh at the University of Manchester wants to change that job. He wants to create the nano equivalent of the assembly line of a factory. To start with, the factory has robots that pick up objects and move them somewhere else. Leigh has built nanoscale robots that can pick up a single molecule and move it somewhere else. (The paper is "Pick-up, Transport and Release of a Molecular Cargo Using a Small-molecule Robotic Arm" in Nature magazine, 2015).
Narnia: Is there a lot of interaction between Nanotech and Biotech?
piero:
A nanoparticle can be a "Trojan horse" inside a cell.
For example,
in 2016 Howard Petty's team at the University of Michigan created a nanoparticle that kills a tumor cell in the eye by creating a sort of short circuit inside the cell's metabolism. (The paper is "WO3/Pt Nanoparticles are NADPH Oxidase Biomimetics that Mimic Effector Cells in Vitro and in Vivo" in Nanotechnology journal, 2016)
It is not surprising that there are interactions between Nanotech and Biotech,
but sometimes the application is surprising. If you want to create a
storage medium for data that will last for thousands and thousands of years,
you can just look at what Nature invented: DNA. DNA stores a lot
of information in a very small space and, under ideal circumstances,
it lasts for hundreds of thousands of years. Basically,
a fossil with well-preserved DNA is the longest-lasting data storage ever
invented on this planet, long before we invented computers.
Robert Grass at the Swiss Federal Institute of Technology (ETH) in Zurich
has created a prototype of these "synthetic fossils", and has
encoded into it Archimedes' ancient mathematical treatise "The Method of Mechanical Theorems" and Switzerland's federal charter of 1291.
(The paper is "Robust Chemical Preservation of Digital Information on DNA in Silica with Error-Correcting Codes" in Angewandte Chemie, 2015).
Narnia: Which new materials have been created by Nanotech?
piero:
So far the biggest success story of Nanotech has come from England: in 2004 Andre Geim and Konstantin Novoselov at the University of Manchester isolated graphene (the technical term is "exfoliation": exfoliation of graphene from graphite).
This made graphene very popular in the scientific community.
Graphene is a one-atom-thick layer of pure carbon. It is the lightest material known, the strongest material known (200 times stronger than steel), the best conductor of heat at room temperature, and the best conductor of electricity known (capable of carrying electricity at a speed of 1 million meters per second).
We are also lucky that carbon is the fourth most abundant element in the universe (by mass) after hydrogen, helium and oxygen.
And carbon is the key ingredient of life on this planet, which means that
graphene should be an ecologically friendly, sustainable material.
Graphene is impacting so many fields, from batteries to semiconductors,
from bend-able electronics to solar cells.
In 2014 Kisuk Kang's team at Seoul National University designed an all-graphene battery.
Graphene Laboratories (now Graphene 3D Lab), founded in 2009 in New York by Elena Polyakova, is working on 3D-printed batteries made of graphene.
Hefei University of Technology also specializes in graphene electrodes for lithium-ion batteries.
The lithium-air battery remained a theoretical possibility until a discovery in 1996 by Kuzhikalail Abraham at EIC Laboratories in the Boston area showed a practical way to build one. The appeal of this kind of battery is the amount of energy that it can store, which is ten times more than today's best batteries; a lithium-air battery is comparable to gasoline: gasoline can store 13 kWh per kilogram and this kind of battery 12 kWh/kg. For almost 20 years these batteries remained difficult to build, until 2015, when Clare Grey's team at the University of Cambridge used graphene electrodes. We now have a good chance of seeing electric cars powered by lithium-air batteries that would compete with gasoline cars.
Graphene can be used to build ultracapacitors with better performance than today's batteries and, last but not least, fast-charging batteries.
The so-called "Laser-Scribed Graphene (LSG) supercapacitors" are flexible and light energy-storage devices that recharge quickly. If they used
LSG supercapacitors, electronic devices would charge in seconds.
In 2008 the University of Cambridge and Nokia demonstrated a concept phone code-named Morph that was able to recharge itself via solar energy, but it was not
commercially feasible. Graphene might resurrect the dream of a self-recharging
phone, because LSG supercapacitors can recharge much faster.
Graphene can be used to make
mobile phones that you can roll up and put in the pocket, or
TV sets as thin as wallpaper; in general,
bend-able electronic gadgets. Maybe we will reinvent the newspaper, except
that it will be a bend-able e-reader that we can fold away in the pocket
just like we used to do with newspapers 20 years ago.
Graphene can theoretically replace silicon in computer chips because
electrons in graphene move at higher speed compared to electrons in silicon.
The great revolution in screen technology has been the Light-Emitting Diode (LED). But a LED can emit light of only one color. In 2015 Tian-Ling Ren's team at Tsinghua University in Beijing used graphene to build the first LED that can be tuned to emit different colors of light.
Graphene can be used to create better solar cells.
In 2012 Zhenan Bao's team at Stanford replaced the traditional electrode materials with graphene and carbon nanotubes, and this way they built the first solar cell made entirely of carbon nanomaterials.
Michael Crommie, a scientist at the Materials Sciences Division at the Lawrence Berkeley National Laboratory and a professor of Physics at U.C. Berkeley, is working on solar cells the size of a single molecule (a single graphene nanoribbon).
In 2015 California suffered one of the most severe droughts in its history.
Ironically, this state famous for technology ran out of water even though it has
a coastline of 1,350 km. The reason is simple: it only had two desalination
plants. Now there are many more under construction, but the most common method
of desalination, reverse osmosis, consumes a lot of energy, so the solution
to desalination becomes a problem of energy production, just when California
was trying to reduce energy consumption.
The World Health Organization estimates that more than 2 billion people
don't have the amount of clean water they need, a fact that is indirectly
responsible for 2 million deaths a year. Most of these people live in countries
that have long coastlines.
The Japanese physicist Sumio Iijima first observed carbon nanotubes in 1991,
way before the discovery of graphene. A carbon nanotube is the equivalent of
a sheet of graphene rolled into a cylinder.
We have known since at least 2008 that carbon nanotubes can provide
an efficient method to filter seawater, thanks to the study of
Ben Corry at the University of Western Australia
(The paper is "Designing Carbon Nanotube Membranes for Efficient Water Desalination", 2008). A few years later
Jeffrey Grossman and David Cohen-Tanugi at MIT showed that sheets of graphene would be hundreds of times more efficient than reverse osmosis.
(The paper is "Water Desalination across Nanoporous Graphene" in the Journal of the American Chemical Society, 2012).
Scientists at the Oak Ridge National Laboratory in Tennessee have perfected the method. (The paper is "Water Desalination using Nanoporous Single-layer Graphene" in Nature Nanotechnology, 2015).
Fuel cells could be a source of clean energy. A fuel cell, which looks like
a traditional battery with two electrodes,
generates electricity from a simple chemical reaction:
it converts hydrogen into water by combining it with the oxygen of the air.
This reaction generates a little bit of electricity
between the two electrodes.
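For reference, the underlying chemistry is the textbook hydrogen-oxygen reaction (standard electrochemistry, not specific to any particular fuel cell design):

    2H_2 \rightarrow 4H^+ + 4e^-              (at the anode)
    O_2 + 4H^+ + 4e^- \rightarrow 2H_2O       (at the cathode)
    2H_2 + O_2 \rightarrow 2H_2O              (overall),  E^0 \approx 1.23\,V

The electrons released at one electrode travel through the external circuit to the other electrode, and that flow is the electricity.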
To increase the amount of electricity, the reaction must be accelerated by
a catalyst, i.e. the electrodes must be coated with a catalyst.
Traditionally, the catalyst was platinum, which is an expensive material.
Yanguang Li and Hongjie Dai at Stanford found an alternative to platinum: carbon nanotubes. (The paper is "An Oxygen Reduction Electrocatalyst Based on Carbon Nanotube-graphene Complexes" in Nature, 2012)
Graphene is also more "biocompatible" than most materials, meaning that
it doesn't cause damage inside the body. Experiments at the University of Trieste in Italy show that graphene electrodes can be implanted safely in the brain. (The paper is "Graphene-Based Interfaces Do Not Alter Target Nerve Cells" in ACS Nano, 2015).
Graphene-based foams are ultra-light materials. In 2013 Zhejiang University announced the invention of Graphene Aerogel, the lightest material ever made, and another ultralight graphene-based foam was created in 2014 at Rice University by Pulickel Ajayan's team.
The applications of graphene and carbon nanotubes are virtually endless.
In 2013 a Stanford team led by Subhasish Mitra and Philip Wong built the first carbon nanotube computer. (The paper is "Carbon Nanotube Computer" in Nature, 2013). Unlike graphene, which is always a conductor, carbon nanotubes can be semiconductors. Unfortunately, this computer was made of fewer than 200 transistors and it only offered a clock speed of 1 kilohertz. In 2015 the same team presented a vastly improved technique. Their main competitors are at IBM in New York state. In 2015 Wilfried Haensch's group at IBM showed a carbon nanotube transistor that further improves that technology (17 years after IBM made one of the first carbon nanotube transistors in the world).
There are other types of nanomaterials: zero-dimensional nano-particles, one-dimensional nanowires, and three-dimensional networks. But physicists are obsessed with two-dimensional nano-sheets like graphene because they have unique properties (mechanical flexibility, electrical conductivity and optical transparency) which are ideal for the manufacturing of electronic and photonic devices. Another two-dimensional nano-sheet is molybdenum disulfide (MoS2), studied since 2010 by Tony Heinz, first at Columbia University and now at Stanford.
Narnia: Are there are other "magical" materials besides graphene?
piero:
Since 2009 British-based firm P2i has developed a coating that can be spread over electronics to repel water. In 2012 similar coatings were introduced by California-based Liquipel and Utah-based HzO.
An even broader class of liquids is repelled by the material invented in 2013 by Anish Tuteja's team at the University of Michigan. (The paper is "Superomniphobic Surfaces for Effective Chemical Shielding" in the Journal of the American Chemical Society, 2013).
Nanotechnology is now trying to engineer materials that we will not need
to clean, "self-cleaning" materials, materials that are always clean.
The inspiration comes from Nature.
The lotus flower is loved all over Asia because it is so clean. And yet
you find it mostly in very muddy swamps. Botanists have studied how its
leaves remain so clean and discovered that they are covered with a material
that cleans itself; or, better, a material that cannot be stained, so that
raindrops will remove any "dirt" that falls on them.
The principle of self-cleaning was discovered in 1973 by the German botanist
Wilhelm Barthlott.
Unfortunately, 40 years later we still haven't found a way to match Nature.
The lotus effect remains a matter of laboratory research.
One promising material that comes close to the "lotus effect" is
titanium dioxide
(which, by the way, is often an ingredient in sunscreen lotions). Its
properties were publicized in 1967 by the Japanese scientist Akira Fujishima.
This scientist built his own house with self-cleaning exterior walls.
Since then, titanium dioxide has become the ingredient of many sprays that
clean smooth surfaces.
This material is both photocatalytic (activated by light) and hydrophilic
(it loves water). This means that, as long as there is light (UV light, to
be precise), rain spreads uniformly across the surface and works like
a rag that wipes the surface.
Nanotechnology inserts nano-particles of
titanium dioxide directly into the surfaces of objects.
Many new buildings have "self-cleaning windows" because they have a
10 nanometer coating of titanium dioxide.
These "self-cleaning windows" tend to deteriorate quickly, but there is
progress every year. For example, in
2015 Yao Lu at University College London, in collaboration with Imperial College London and China's Dalian University of Technology,
unveiled a more durable paint made from coated titanium dioxide nano-particles.
(The paper is "Robust Self-cleaning Surfaces that Function when Exposed to Either Air or Oil" in Science magazine, 2015)
The days of dirt-repelling and self-cleaning materials are not too far away.
Future generations may not know that there was a time when things had to
be protected from stains and cleaned periodically.
Christina Lomasney, a former physicist at the University of Washington who is now the founder of Modumetal in Seattle, has invented another class of materials, nanolaminated metals, that, unlike traditional metals, can be produced by using electricity, not heat.
The list is endless. A few years ago Tobias Schaedler at the Hughes Research Laboratories in Los Angeles developed an extremely lightweight metallic material that is 99.99% air ("Ultralight Metallic Microlattices", 2011), which at the time was the lightest material ever made. Xiang Zhang at the Lawrence Berkeley National Laboratory built an invisibility cloak by covering the object with a tiny sheet of nano-antennas that redirect light waves ("An Ultrathin Invisibility Skin Cloak For Visible Light", 2015). Swedish researcher Lars Berglund created transparent wood: he eliminated "lignin", the chemical substance that makes wood opaque ("Optically Transparent Wood from a Nanoporous Cellulosic Template: Combining Functional and Structural Performance", 2016).
Quantum dots are semiconductor nano-particles that are very small (10,000 times smaller than a human hair) but very powerful. They can enhance the colors of a television screen. Samsung has pretty much abandoned OLED displays for quantum-dot displays, and Amazon has used quantum dots in its Kindle Fire HDX.
Nanotechnology also allows us to think differently. For example, we have always thought of making homes warmer: insulate the walls and the roof, use electricity or gas for heating. What if instead we made clothes warmer? Yi Cui at Stanford University is working on silver nanowire fabric that protects from the cold and even generates its own heat. If he finds a way to dye fabric with this material, we will soon be buying self-heating sweaters. Cui points out that half of the energy produced by the world is used for heating buildings, and that this generates a third of the world's greenhouse gas emissions. Using similar principles, scientists may come up with self-cooling clothes for very warm places.
Graphene has competitors.
In 2014 Julia Greer at Caltech created a ceramic that, too, is exceptionally strong and lightweight ("Strong, Lightweight, and Recoverable Three-dimensional Ceramic Nanolattices" in Science magazine, 2014).
In 2015 Xiaochun Li's team at UC Los Angeles created a super-strong metal that is also very light. This is the kind of material that could help us build not only lighter airplanes but also lighter spacecraft. The number one problem for anything that has to fight gravity is its own weight. (The paper is "Processing and Properties of Magnesium Containing a Dense Uniform Dispersion of Nanoparticles" in Nature, 2015).
Graphene is a two-dimensional nano-sheet that occurs naturally. Xudong Wang's team at the University of Wisconsin studies two-dimensional nano-sheets (whose thickness is just a few atoms) that don't exist in nature. (The most recent paper is "Nanometre-thick Single-crystalline Nanosheets Grown at the Water-air Interface" in Nature Communications, 2016).
I heard that there are now more than 500 two-dimensional materials, just
a decade after the "discovery" of graphene.
Many of these new materials don't even have a name.
Gerbrand Ceder has launched the Materials Project at UC Berkeley to catalog all
materials and their features, a sort of genome of each material, so that it
will be easy to find the material you need based on the properties you require.
We are on the verge of a major revolution in materials, and this revolution
will fuel the revolution in consumer electronics, Biotech, Internet of Things
and space exploration.
Xtalic, an MIT spinoff, has created materials that can replace gold. Singapore-based IIa Technologies can make diamonds in the laboratory.
The problem remains the same: it is extremely difficult and expensive to
create these new materials, so the scientists only create very tiny quantities
to study their properties. We still need to find a way to create new materials
in a simple and efficient way.
There is a chemist, Chad Mirkin, at Northwestern University near Chicago,
the director of the International Institute for Nanotechnology,
who in 1996 pioneered a way to create new materials.
He became famous for the paper "A DNA-based Method for Rationally Assembling Nanoparticles into Macroscopic Materials" (Nature magazine, 1996) and today
he is one of the most cited chemists in the world.
Mirkin used a combination of gold and DNA to create a new material.
It is interesting that the DNA (the typical double-helix structure)
was used to "bind" the nano-particles of gold.
He has spent 20 years improving that idea.
In 2015 Chad Mirkin created a new material that can change shape. His technique
allows the same nano-particles to assemble in more than 500 different forms.
Basically, he has invented a material made of "reprogrammable" particles.
Some scientists call it "pluripotent matter", a material that can transform
itself into different materials. (The paper is "Transmutable Nanoparticles with Reconfigurable Surface Ligands" in Science magazine, 2016)
Narnia: Superconductivity is a state of matter in which electrons flow without resistance. This is achieved at temperatures close to absolute zero, but is much harder to achieve at normal temperatures. This complicates practical applications (like magnetically levitated trains) and makes them very expensive. MRI machines used in hospitals are another example of the application of superconductivity because they use superconducting magnets, but the magnets of MRI machines must be cooled all the time, which explains why an MRI scan is so expensive. Can Nanotech create superconductors that work at room temperature?
piero:
It is hard to determine how much progress is being made.
In 2014 Chris Pickard's team at the London Centre for Nanotechnology and Zhi-Xun Shen's team at Stanford University proposed a way that graphene could become a superconductor, but it is too early to say whether this method will work.
In 2014 Mikhail Eremets' team at the Max Planck Institute in Mainz (Germany) achieved superconductivity at a record high temperature (-70 degrees
Celsius, which, relatively speaking, is almost "room temperature") by using a hydrogen-sulfur compound (The paper is "Conventional Superconductivity at 203 K at High Pressures" in Nature, 2015).
In 2015 the same Eremets showed that phosphine (a hydrogen-phosphorus compound) was also a promising material.
In 2015 Kosmas Prassides' team at Tohoku University in Japan created a carbon-rubidium compound (the so-called "Jahn-Teller metal") that behaves as an insulator,
a superconductor, a metal and a magnet, all at the same time. (The paper is "Optimized Unconventional Superconductivity in a Molecular Jahn-Teller Metal" in Science Advances, 2015).
The teams of Fan Zhang at the University of Texas in Dallas and Yugui Yao of the Beijing Institute of Technology are also working on room-temperature superconductivity.
These are all very experimental ideas. In practice, places like SLAC (Stanford Linear Accelerator Center) still use iron-based materials.
In 2006 Hideo Hosono at the Tokyo Institute of Technology discovered the first iron-based high-temperature superconductor ("Iron-Based Layered Superconductor"). Maw-Kuen Wu's team in Taiwan caused a sensation in 2008 when they discovered that a chemical compound called iron selenide (chemical symbol: FeSe) can be a high-temperature superconductor ("Superconductivity in the PbO-type Structure alpha-FeSe", 2008). FeSe is easier to manufacture than other superconducting materials.
Over the last few years, scientists have turned to lasers in order to achieve
higher superconducting temperatures.
In 2014 Andrea Cavalleri's team at the Max Planck Institute in Hamburg (Germany) used lasers to achieve superconductivity
at room temperature... but only for 2 trillionths of a second, i.e.
0.000000000002 seconds;
and in 2016 the same team was successful again, using this time a "fullerene"
molecule. A fullerene is very similar to graphite.
When it is in a cylindrical shape, it is a carbon nanotube.
They warmed up this superconducting fullerene to 103K, but
only for a fraction of a second.
In 2018 Pablo Jarillo-Herrero at MIT proved that graphene becomes superconducting in at least one specific case: when two sheets of graphene are pressed together at a specific angle, discovered in 2011 by Allan MacDonald at the University of Texas at Austin, at which the electrons in the two sheets become strongly correlated with one another. The misalignment of the two graphene sheets yields intriguing properties, and so this "twisted bilayer graphene" has become one of the most studied materials in the world.
I always wonder what would happen if superconductors at room temperature
became common materials. Scientists don't realize that it would also imply
an environmental catastrophe: imagine the garbage dumps of the world with
piles and piles of our TV sets, computers, phones, transformers and all the
electrical devices that exist today in the world. If they invent superconductivity at room temperature, invest in a recycling company!
Narnia: Moore's Law (that the power of processors doubles every 18 months) has been correct for 50 years, but many now think that it is coming to an end. Can Nanotech help sustain Moore's Law?
piero:
Moore's Law is the reason that we have had so much change in the devices
that we use. Almost every decade our computers have completely changed:
the mainframe in the 1960s, the minicomputer in the 1970s, the personal
computer in the 1980s, the portable computer in the 1990s, the smartphone in
the 2000s, and now the embedded processors for the Internet of Things.
We assume that ten years from now the world will be completely different
because of a new generation of computing devices, but what happens if
Moore's Law stops working? Will there be a new generation of computing devices?
The implications are colossal.
The honest truth is that Moore's Law started failing in 2005,
when Intel and AMD introduced their first "dual-core" processors.
The original Moore's Law was about the number of electronic components
that could be squeezed into an electronic chip. In the 2000s we started
talking of Moore's Law as a law about the computational power of chips.
Intel's Xeon Haswell-EP of 2015 boasted 5.5 billion transistors but...
thanks to 18 cores. The original microprocessor was basically a computer
on a chip.
A multi-core processor is like putting many computers on one chip.
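To see what the original law claims, here is a back-of-the-envelope Python sketch (the function name is mine; it uses the two-year doubling period that Moore himself settled on in 1975, starting from the Intel 4004's 2,300 transistors in 1971):

    # Illustrative only: the original Moore's Law as a doubling every ~2 years.
    def predicted_transistors(year, start_year=1971, start_count=2300, doubling_years=2):
        return start_count * 2 ** ((year - start_year) / doubling_years)

    print(f"{predicted_transistors(2015):.2e}")
    # ~9.6e9: the same order of magnitude as the 5.5 billion transistors
    # of the Xeon quoted above, but only by counting all 18 cores.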
The cost per transistor has actually been rising since the
Taiwan Semiconductor Manufacturing Company (TSMC)
introduced its 28-nanometer (28nm) chips in 2011.
In fact, since 2012 Intel has been using a different kind of transistor,
the "tri-gate" transistor. The rest of the world calls it "FinFet" transistor.
Chenming Hu invented them at UC Berkeley in 1998, and one of his students,
Yang-Kyu Choi, founded
the Nanotech lab at the Korea Advanced Institute of Science and Technology
(KAIST)
that set one record after another in FinFETs.
In 2015 Intel started shipping the 14nm Skylake processor (400,000 times more
powerful than the Intel 4004), but in that year
Intel also announced that its 10nm Cannonlake processor would be delayed to
2017. The "nanometer" number refers to the size of the smallest features on
the chip, which determines how densely the transistors can be packed.
The first microprocessor, the Intel 4004, was manufactured with a 10,000nm
process and contained just 2,300 transistors.
It is just getting too difficult and too expensive to operate at that
scale. A Skylake transistor is made of about 100 atoms.
If they continue shrinking, within a decade we should have
2nm technology, and, since an atom's diameter is about 0.2 nanometers, that
would mean transistors that work in a space of just 10 atoms!
This is technically feasible (in fact, Yang-Kyu Choi's team at the KAIST already built a 3nm FinFET in 2006), but extremely expensive.
The cost of building a factory for today's microprocessors is already
in the billions of dollars.
In fact, in 2016 one of Intel's executive vice-presidents, William Holt,
openly admitted that Intel does not plan to use silicon below the 7nm threshold.
This is not surprising because in 2014 IBM had announced that it was investing $3 billion into "post-silicon" computer technology, and specifically mentioned the 7nm limit.
That's when Silicon Valley will stop being "silicon".
Holt mentioned "spintronics" as one possible alternative to today's
"electronics". The spin is a physical property of particles like the electron.
The spin of an electron is either up or down, a two-state system
that can easily be mapped to one and zero.
There is still hope for silicon, though, if scientists manage to couple it with light. A way to improve the performance of computers is to keep the silicon transistors but use light to transfer information. This should yield faster speeds and lower consumption. We use fiber-optic cables to transport the data of the Internet around the world, but we still use copper wires to transport data from one circuit to another on the same chip. Fiber optics is faster because it uses light, but it is difficult to shrink to chip scale.
In 2012 IBM announced a chip wired with both electrical and optical connections and in 2015 presented a much improved version.
In 2015 Rajeev Ram at MIT announced that his team (in collaboration with UC Berkeley) had built such an "optoelectronic" processor.
By the way, Moore's Law for hardware is dead, but there is a parallel law
for software that doesn't have a name. People don't talk enough about the fact
that the price of software has been falling exponentially.
In fact, most apps are now free: we went from software that cost millions of
dollars in the 1970s to $0.
It is getting more and more difficult to achieve faster speeds without
generating excessive heat.
Increasing the clock speed of a chip also increases its electrical
consumption, which in turn increases the heat it generates.
In other words, chips generate more heat as you squeeze more silicon
components into a tiny space.
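A standard rule of thumb from chip design (a textbook formula, not specific to any vendor) makes the problem concrete: the dynamic power dissipated by a chip grows linearly with the clock frequency and quadratically with the supply voltage,

    P_{dynamic} \approx \alpha \, C \, V^2 \, f

where \alpha is the fraction of transistors switching at any moment, C the switched capacitance, V the supply voltage and f the clock frequency. Since higher frequencies typically also require higher voltages, raising the clock speed gets punished twice.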
You can build a cheap, small and powerful chip, but that chip is useless if it requires an expensive cooling mechanism.
The way Intel and the other semiconductor giants have solved the problem is
to squeeze multiple processors on the same chip.
Nanotechnology opens up the possibility of nanocircuits that will not
have that problem. It has been known for a decade that
graphene "nanoribbons", introduced theoretically by Mitsutaka Fujita in 1996,
could replace silicon semiconductors and provide much higher transistor
density and clock speeds. The problem is to manufacture them.
Paul Weiss at UCLA, Felix Fischer at UC Berkeley and Michael Arnold at the University of Wisconsin are experimenting with methods to improve the production of graphene nanoribbons.
Even if Intel and the others found a way to cool down the circuits, those
circuits are approaching the size of a few atoms, smaller than most viruses.
A little smaller and electronic circuits will start experiencing quantum
effects that will make them unreliable.
If progress in the speed of microprocessors stops, the consequences will
be very serious for many fields, including Artificial Intelligence, where
progress has mainly come from "brute force", from more and more powerful
processors. However, this will not be the first time that progress stopped.
Think of airplanes. Today's airplanes fly at the same speed as the airplanes
of the 1960s, and, in fact, they fly slower than the Concorde, which doesn't
exist anymore. That has not stopped people from flying. And it has not stopped
aircraft manufacturers from producing better airplanes. "Faster" is not
always "better". Most people are not interested in super-fast computing but
in batteries that will last longer on their smartphones. A smartphone has
to absorb signals from Wi-Fi, Bluetooth and GPS, and at the same time
broadcast its location, show videos and recognize when your fingers touch
the screen. Today this requires a lot of electrical power. We need progress
also in reducing the power that all these functions consume.
The huge costs of making chips have forced companies to merge into conglomerates.
Today the semiconductor market is dominated by very few conglomerates:
Intel, Samsung, TSMC and I don't know who else (Qualcomm, AMD and many others
sell chips, but those chips are manufactured by foundries in Asia).
The situation is very similar to the situation in the airplane and car
industries. It is difficult to imagine a major revolution from these big
bureaucracies in the way that airplanes, cars or electronic chips are made.
But new materials could solve the problem. Graphene is always top of the list,
but not the only hope. There are many two-dimensional materials that are
being made and studied around the world for the purpose of replacing silicon.
The problem with graphene is that it conducts too well. Most scientists would
prefer to find another semiconductor like silicon.
Since 2010, when Andras Kis at the Federal Institute of Technology in Lausanne
built the first transistor with it, the material called TMDC
(transition-metal dichalcogenide) has been a candidate to replace silicon.
In 2016 Madhu Menon's team at the Center for Computational Sciences of the University of Kentucky discovered a new material that is one atom thick like graphene, but is a semiconductor like silicon. This new material is made up of three elements that are easily available on our planet: silicon, boron and nitrogen. (The paper is "Prediction of a new Graphene-like Si2BN Solid" in Physical Review B, 2016)
Silicon is still a candidate for the electronic circuits of the future, except that it could be used in a very different way: to transport light instead of electrons. The fastest way to transport information is optical. A fiber-optic cable has a much broader bandwidth than a copper cable, therefore fiber-optic cables are used to transmit large amounts of data over large distances. But not inside an electronic chip. Inside an electronic chip the connections are made of copper. The reason is that we cannot confine broad bandwidth into the nano-size of an electronic chip. We can do it with copper wires, but not with optical fiber. It is difficult to shrink the wavelength of light. Saman Jahani at the University of Alberta in Canada and Zubin Jacob at Purdue University have found a way to do it with transparent silicon-based metamaterials that they created in their laboratories. Some day computers may be made of silicon-based photonic circuits. (The most recent paper is "Overview of Isotropic and Anisotropic All-dielectric Metamaterials" in Nature Nanotechnology, 2016).
Doug Barlage's team at the University of Alberta has developed a new kind of transistor, an evolution of the old MOSFET transistor invented in 1959 at Bell Labs, that could be used to build very thin and bend-able electronic devices. (The paper is "Sustained Hole Inversion Layer in a Wide-bandgap Metal-oxide Semiconductor with Enhanced Tunnel Current" in Nature Communications, 2016)
Narnia: Are there other ways in which Nanotech can improve computers?
piero:
Today's computers use a kind of memory called
D-RAM to store information. That memory is volatile: when you turn off the
device, all information is lost. When you turn on the device again, that
information has to be copied back into memory from a magnetic disk.
Computer memory is made of transistors and capacitors, and these are
"volatile": they must be continually powered (and refreshed) in order to
preserve information.
That's why digital devices need to "boot up".
There is another way to build computers: using memristors instead of transistors.
Memristors are a nonvolatile technology: they don't lose their information
when the power is turned off.
Memristors had been theoretically discussed by Leon Chua at UC Berkeley in 1971,
but to prove their existence you need to work at nanoscale, which was not
possible until recently.
Finally in 2008 Stan Williams at Hewlett-Packard proved the existence and practicality of "memristors".
A memristor is neither a resistor nor a capacitor nor an inductor. It is a fourth fundamental circuit element with properties that cannot be achieved by any combination of the other three.
A memristor behaves like a synapse in the brain: a memristor's behavior
depends on the history of the current that has flowed through it,
just like the "strength" of a synapse depends on how often it is used.
Today's neural networks are not hardware devices: they are software
simulations of neural networks, simulations that run on traditional
computers. All the "deep learning" of today's Artificial Intelligence
is, in reality, computational mathematics performed on computing machines.
Building a neural network in hardware is not easy with transistors,
which are designed for binary logic, i.e. for digital devices, not for
analog devices; but a memristor is "analog" and therefore a better candidate
to simulate the analog synapses of the brain.
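To see what "history-dependent resistance" means, here is a minimal Python sketch of the linear ion-drift model that HP published in 2008 (the parameter values are illustrative, not those of any real device):

    # Linear ion-drift memristor model (after Strukov et al., 2008).
    # The resistance M depends on w, the width of the doped region,
    # which in turn depends on all the current that has flowed before.
    import math

    R_ON, R_OFF = 100.0, 16000.0  # resistance when fully doped / undoped (ohms)
    D = 10e-9                     # device thickness: 10 nanometers
    MU = 1e-14                    # ion mobility in m^2/(V*s), illustrative
    w = 0.1 * D                   # initial width of the doped region
    dt = 1e-6                     # time step of the simulation (seconds)

    for step in range(200001):
        t = step * dt
        v = math.sin(2 * math.pi * 5 * t)         # 5 Hz sinusoidal voltage
        m = R_ON * (w / D) + R_OFF * (1 - w / D)  # current resistance: the "memory"
        i = v / m                                 # Ohm's law
        w += MU * (R_ON / D) * i * dt             # ion drift moves the doped boundary
        w = min(max(w, 0.0), D)                   # boundary stays inside the device
        if step % 50000 == 0:
            print(f"t={t:.2f}s  V={v:+.2f}V  M={m:7.0f} ohm")

Cut the power and w simply stays where it is: when the power comes back, the device still "remembers" its last resistance, which is exactly the nonvolatile, synapse-like behavior described above.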
In 2010 scientists at the University of Michigan mixed semiconductor neurons and memristor synapses (the paper is "Nanoscale Memristor Device as Synapse in Neuromorphic Systems" in Nano Letters, 2010).
In 2015 Dmitri Strukov's team at UC Santa Barbara built a neural network of about 100 artificial synapses made of metal-oxide memristors (the paper is "Training and Operation of an Integrated Neuromorphic Network Based on Metal-oxide Memristors" in Nature, 2015), and Russian scientists at the Kurchatov Institute built a neural network based on plastic memristors (the paper is "Hardware Elementary Perceptron Based on Polyaniline Memristive Devices" in Organic Electronics, 2015).
In 2015 the New Mexico-based startup Knowm announced that it had built an analog chip using memristors specifically for machine-learning applications.
Magnetic storage can also benefit from Nanotech.
In 2011 Andreas Heinrich's team at IBM's Almaden Research Center in San Jose
reduced from about one million to 12 the number of atoms required to store
a bit of data. In practice, this meant the feasibility of magnetic memories 100 times denser than the most popular hard disks and memory chips.
But the fascinating fact of nanotech is the power to control matter at the
atomic scale. For example,
in 2013 this same Heinrich proved to the world the power of this technology by making the world's smallest movie, "A Boy and His Atom" (https://www.youtube.com/watch?v=oSCX78-8-q0), an animation movie in which the moving dots are single atoms.
In 2012 Michelle Simmons at the University of New South Wales and Gerhard Klimeck at Purdue University created a transistor from a single atom
(an atom of phosphorus).
In 2016 Sander Otte's team at Delft University in the Netherlands encoded two paragraphs of Feynman's speech "There's Plenty of Room at the Bottom" at the atomic level. In theory, the technique they used would allow storing 10 trillion bytes per square centimeter, i.e. ten million megabytes in a square centimeter. Imagine one million 10-megabyte flash drives in one square centimeter.
Narnia: Is the future of computers in quantum computing?
piero:
Feynman's original idea couldn't possibly work because he assumed that it was possible to build a quantum computer that was only "locally" connected. Bell's theorem of 1964 proved that quantum mechanics is nonlocal (the property that Einstein didn't believe).
In 1992 David Deutsch, a physicist at Oxford University, published a class of problems that can be solved more efficiently by quantum computation than by classical methods.
In 1994 Peter Shor published a quantum algorithm for finding the prime factors of an integer in a reasonable amount of time. Finding the prime factors of an integer is such a time-consuming task that
RSA, the widely used algorithm for encryption, is based precisely on the
difficulty of factoring large integers. Shor basically showed that for a quantum computer it would be trivial to crack the most popular encryption method.
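To give a sense of the gap (these are the standard complexity estimates, not figures from Shor's paper): the best known classical factoring algorithm, the general number field sieve, takes sub-exponential time, while Shor's algorithm takes polynomial time on a quantum computer,

    \exp\left( O\left( (\ln N)^{1/3} (\ln \ln N)^{2/3} \right) \right)    (best classical algorithm)
    O\left( (\log N)^3 \right)                                            (Shor's algorithm)

where N is the integer to be factored. That difference is what turns "practically impossible" into "trivial".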
In 1996 David DiVincenzo published an influential paper about the desired architecture of a quantum computer. The following year DiVincenzo and Daniel Loss also sketched a possible quantum computer which would use qubits made of electron spins in quantum dots.
In 1997 British physicist Colin Williams and Xerox PARC's Scott Clearwater published a book titled "Explorations in Quantum Computing" in which they described how to build a quantum computer.
In 1999 two quantum physicists, Geordie Rose and Alexandre Zagoskin, founded D-Wave in Canada to build quantum computers. In 2007 D-Wave demonstrated its first prototype at the Computer History Museum in Mountain View, and in 2011 it sold its first commercial computer, although many experts still doubt that D-Wave's machines are real quantum computers. Nonetheless, D-Wave's investors include Amazon's founder Jeff Bezos and In-Q-Tel (the venture arm of the CIA), and its customers include NASA and Google.
The engineering problems are not trivial.
The lifetime of qubits is measured in milliseconds, and the quantum entanglement that links them together (the phenomenon behind "quantum teleportation") works at distances of micrometers.
The most exciting research is probably going on at the Joint Quantum Institute (JQI) that was established in 2006 by the National Institute of Standards and Technology (NIST), the National Security Agency (NSA) and the University of Maryland (located near Washington).
In 2009 NIST unveiled a universal programmable quantum computer, but the
achievement was mainly theoretical, with little or no practical applications.
In 2013
Marc Warner's team at the London Centre for Nanotechnology discovered that the electrons in a dye called "copper phthalocyanine" remain in superposition for long times: maybe they discovered the silicon of quantum computing.
In 2014 Delft University in the Netherlands teleported information between two quantum bits separated by three meters with an error rate of zero, which was
a major achievement.
In 2015 NIST transferred quantum information at a distance of over 100 km,
but transferring quantum information between long-lived qubits is still a
very hard problem.
One of NIST's scientists, David Wineland, was awarded the Nobel Prize in 2012.
In 2016 Christopher Monroe's team at the University of Maryland unveiled
five-qubit modules that one could combine to create quantum computers with
larger numbers of qubits. Monroe (who had already built a quantum processor
in 2006 at the University of Michigan) used ions of an element called ytterbium
(an element with atomic number 70).
D-Wave claims to have built a quantum computer capable of more than 1,000
qubits, but scientists doubt it. Monroe's experiment, instead, can be
replicated by any university.
In 2016 IBM made a five-qubit computer available on the cloud, i.e. it launched the first cloud-based quantum computing platform. Called the Quantum Experience, this quantum computer was physically located at IBM's research laboratories in New York state, in a refrigerator at almost absolute zero temperature.
There are two main problems for building a quantum computer.
The first one is that most quantum computers
use superconducting circuits because quantum computing is easier with
superconductors; but superconductivity requires very low temperatures,
so the cooling process can be very expensive. The second problem is that
superconducting qubits can be rather unreliable (it's in the nature of
quantum objects).
Google and IBM are very active in this field. In 2013 Google bought a D-Wave machine,
which it keeps at a NASA laboratory in Mountain View, and in 2014 Google
hired John Martinis, a professor at UC Santa Barbara who had worked
on qubits for more than ten years. The quality of qubits is not a detail:
D-Wave's qubits are not as reliable as Martinis'.
This really started a competition with IBM.
In 2015 Martinis' team at UC Santa Barbara delivered a highly reliable architecture of
nine qubits arranged in a line. Months later Jay Gambetta's team at IBM
in New York State
responded with a similar architecture of just
four qubits arranged in a two-by-two array.
They are racing to build
a universal quantum computer, that will probably have about 100 qubits.
Bob Willett at Bell Labs and Michael Freedman at Microsoft are pursuing a
different kind of qubit, the "topological qubit", hoping that it will not
have the problems of the superconducting qubit.
Intel is experimenting with silicon qubits on the regular "wafers" that Intel uses for its silicon chips instead of superconducting qubits.
Qubits can be "manufactured" in different ways:
energy levels, electron spins, and... states of the photon.
Photons are difficult to manage but they offer two big advantages:
they preserve entanglement over long distances and for a long time.
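Entanglement is easier to grasp with a toy example. Here is a minimal Python sketch (a simulation on an ordinary computer, of course, not a real quantum device) of two qubits in the Bell state (|00> + |11>)/sqrt(2), the kind of state used in teleportation experiments: measured separately, the two qubits always agree.

    # Toy simulation of measuring the Bell state (|00> + |11>)/sqrt(2).
    import random

    amp00 = 2 ** -0.5  # amplitude of |00>
    amp11 = 2 ** -0.5  # amplitude of |11>

    def measure_pair():
        # The probability of each outcome is the squared amplitude (Born rule).
        return "00" if random.random() < amp00 ** 2 else "11"

    print([measure_pair() for _ in range(10)])
    # Only "00" or "11" ever appear: the two qubits are perfectly correlated,
    # no matter how far apart they are.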
In 2016 Roberto Morandotti at Institut National De Recherche Scientifique (INRS) in Canada and his team demonstrated complex entangled quantum states on an optical chip.
In 2016 Xi-Lin Wang at the University of Science and Technology of China in Hefei produced 10-photon entanglement.
The quantum computer with the most qubits in 2016 was
at Rainer Blatt's laboratory in Austria, and it had only 20 qubits.
So far there is very little to show, but it is a good sign that scientists
are starting companies:
Christopher Monroe of the University of Maryland co-founded IonQ,
Robert Schoelkopf of Yale University co-founded Quantum Circuits,
former IBM physicist Chad Rigetti founded Rigetti in Berkeley, etc.
The phrase "quantum supremacy" was coined in 2012 by the physicist John Preskill. It refers to the moment when a quantum computer will make a calculation much faster than the fastest supercomputer in the world. Google claims that it happened in 2019 when a team led by John Martinis, usually a professor at UC Santa Barbara, used a quantum processor called "Sycamore" to make a calculation a lot faster than any existing supercomputer.
Narnia: What next?
piero:
Narnia: what is the limitation of Nanotech?
piero:
Scientists all over the world are working on new techniques to lower the cost of nanotech manufacturing. Here in the Bay Area a popular technique is colloidal synthesis, explored by Paul Alivisatos at UC Berkeley since at least 1996. Nanoimprint lithography was introduced in 1995 by Stephen Chou at the University of Minnesota. In 2012 Juergen Stampfl at the Vienna University of Technology demonstrated an impressive use of two-photon lithography. The "additive manufacturing" process of 3D printing has been used for increasingly small objects: in 2014 Ho-Young Kim at Seoul National University used it to build nano-objects.
Colin Raston's "Vortex Fluidic Device" (VFD), for which he won an Ig Nobel Prize in 2015, has been shown to be useful in making the kind of precise carbon nanotubes that are necessary for practical applications. ("Fluid Dynamic Lateral Slicing of High Tensile Strength Carbon Nanotubes" in Scientific Reports, 2016).
If one of these techniques dramatically reduces the cost of nanotech manufacturing,
nanotech will take off in grand style.
Another way to solve the problem is to program nano-particles so that they
self-assemble into complex structures. This is what Nature does with proteins.
Ting Xu at the Lawrence Berkeley National Laboratory works on self-assembling nano-particles. In 2014 she demonstrated nano-particles that formed a highly ordered thin film in one minute.
(The paper is titled "Rapid Fabrication of Hierarchically Structured Supramolecular Nanocomposite Thin Films in one Minute" in Nature Communications, 2014).
In 2015 she collaborated with Katherine Ferrara at UC Davis and with John Forsayeth and Krystof Bankiewicz at UC San Francisco to create self-assembling nano-carriers that can transport chemicals into the brain to fight cancer.
Narnia: What about borophene?
piero:
Narnia: Why isn't Nanotech as popular as virtual reality and artificial intelligence?
piero:
Narnia: How is nanotech perceived in Silicon Valley these days, ten years after the nanotech bubble?
piero:
What was the problem? The problem was and is that nanotechnology is not an
industry. It is a technology that benefits many different industries.
There is no Apple and no Facebook of nanotech.
But nanotech can have a massive impact on the multi-billion dollar industries of
displays, batteries and semiconductors.
The semiconductor industry, for example, moved to the 65-nanometer
manufacturing process in 2007. That's nanotech, but few people called it
"nano".
Most biotech is "nano" because it works at the molecular level.
For investors the fundamental problem was that
nanotech applications take a long time before they can generate revenues.
The "time to market" is much longer than, say, for software.
Venture funds like a lifespan of five years. That was not realistic in
the 2000s. It may become realistic very soon, thanks to the new techniques
of nano-manufacturing.
So it is a bit depressing that today we get excited by Nest's "smart devices",
which are really old-fashioned smoke detectors and thermostats. Congratulations
to the designers who made them look "cool" but these devices haven't changed
in decades. Imagine the revolution that would happen if these devices shrank
to the size of a pin: you just pin them to the wall.
My question for the critics of nanotech is simple: what will happen if nanotech
fails? Moore's Law will probably stop, which means that there will only be
very small progress in digital devices. We are so used to digital devices
becoming obsolete quickly, but it could be that in the future digital devices
will not change, just like smoke detectors and thermostats have not changed
in a long time.
The world will be a lot more static and boring if nanotech fails.
Christine Peterson, cofounder of the Foresight Institute.
Jennifer Dionne, Stanford Univ