
20 Metatrends for the Roaring 20s

By Peter H. Diamandis on Jan 5, 2020

In the decade ahead, waves of exponential technological advancements are stacking atop one another, eclipsing decades of breakthroughs in scale and impact.

Topics: 3D Printing, AR/VR, Manufacturing, Sensors, Entrepreneurship, Finance, AI, Exponentials, Exponential Organizations, space exploration, Singularity, machine learning, networks, 5G, Augmented Reality, trillion sensor economy, Business Models, Brain computer interface, internet of things, Spatial Web, exponential technology, BCI, brain machine interface, energy abundance, future of energy, smart economy, trends, 2020s, 2020, sustainability

Abundance Insider: December 21st, 2019

By Peter H. Diamandis on Dec 21, 2019

In this week's Abundance Insider: AI-induced super resolution, robotic safety inspectors, and Lamborghini’s inroads in 3D printing.

P.S. Send any tips to our team by clicking here, and send your friends and family to this link to subscribe to Abundance Insider.

P.P.S. Want to learn more about exponential technologies and home in on your MTP/ Moonshot? Abundance Digital, a Singularity University Program, includes 100+ hours of coursework and video archives for entrepreneurs like you. Keep up to date on exponential news and get feedback on your boldest ideas from an experienced, supportive community. Click here to learn more and sign up.

P.P.P.S. Want a chance to read Peter’s upcoming book before anyone else? Join the Future is Faster Than You Think launch team (applications close on December 6th)! Get an advanced digital copy, access to our private Facebook group, behind the scenes specials, a live Q&A with Peter and Steven, and hundreds of dollars in exclusive bonuses. Click here for details.

Share Abundance Insider on LinkedIn | Facebook | Twitter.

It’s Not You. Clothing Sizes Are Broken.

What it is: Size and fit are two of the leading reasons for online returns, according to e-commerce software company Narvar Inc., translating to costs that further reduce retailers’ already slim profit margins. From 3D body-scanning apps like MTailer and My Size, to Shima Seiki’s knitting machines that produce garments with less than 1% variation, a plethora of companies has recently emerged to combat the issue of inconsistent sizing. Women’s sizes in the U.S. range from 00 to 18, yet there are no standardized body metrics across these sizes. This variation is not represented in online sizing guides, and few guides explain the stretch or texture of the fabric, which also affects fit. Solutions like those offered by True Fit Corp.—which uses a data platform and AI-driven personalized recommendation engine to help consumers find the right size and taste-tailored items—are growing in demand from major retailers. Others, like RedThread, use 3D mobile body scanning and tailoring algorithms to best determine fit.
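The matching step such tools perform can be sketched as a nearest-neighbor lookup over a size chart. Everything below is a toy illustration: the chart values are hypothetical (real charts vary by brand, which is exactly the inconsistency described above), and production systems match far richer measurement sets.

```python
import numpy as np

# Hypothetical size chart: (bust, waist, hip) in inches for a few US
# women's sizes. Real charts differ between brands.
SIZE_CHART = {
    "00": (31, 23, 33), "0": (32, 24, 34), "4": (34, 26, 36),
    "8": (36, 28, 38), "12": (38.5, 30.5, 40.5), "18": (43, 35, 45),
}

def recommend_size(scan):
    """Return the size whose chart entry is closest to a body scan's
    (bust, waist, hip) measurements, by Euclidean distance."""
    scan = np.asarray(scan, dtype=float)
    return min(SIZE_CHART, key=lambda s: np.linalg.norm(scan - SIZE_CHART[s]))

print(recommend_size((35.5, 27.5, 37.5)))  # "8": the nearest chart entry
```

A scan-driven recommender like this sidesteps the sizing-guide problem entirely: the customer never reads the chart, the algorithm does.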

Why it’s important: Some executives, like Levi Strauss & Co.’s CEO Chip Bergh, believe sizes will become obsolete in the next decade. Smartphone-conducted body scans will offer precise measurements that automatically populate online retail platforms. From there, fits can be matched with existing designs or tailored with programmed sewing machines. Offering an even more personalized fit, 3D-printed garments are also on the rise, changing the economics of mass manufacturing. As retail sales continue to migrate online, virtual try-on software is slated to slash returns, now a major pain point for both retailers and consumers. The convergence of these technologies will not only cut costs but also dramatically reduce the environmental toll of shipping, packaging, and textile waste.

AI super resolution lets you “zoom and enhance” in Pixelmator Pro.

What it is: For just $60, Pixelmator is making the “zoom and enhance” trope seen in movies (the ability to zoom into images and retain sharpness) a reality. Using AI, Pixelmator’s “ML Super Resolution” feature allows users to scale an image up to 3X its original resolution without pixelation or blurriness. Like Google’s and Nvidia’s algorithms, Pixelmator’s model is trained on a dataset of paired low-resolution and high-resolution images, from which it learns rules for how the pixels change between the two. Pixelmator’s model, however, is about 50 times smaller than its Google and Nvidia counterparts at just 5MB, lightweight enough to run on-device, and it required only about 15,000 sample images to train.
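The training idea, reduced to a minimal sketch: learn a mapping from low-resolution pixels to the high-resolution blocks they were downscaled from, using paired images. The linear model below illustrates the paired-data approach only; it is not Pixelmator's neural network, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img):
    """Average-pool a (2H, 2W) image down to (H, W)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Synthetic paired dataset: high-res images and their low-res versions.
highs = [rng.random((8, 8)) for _ in range(200)]
lows = [downscale(h) for h in highs]

# Training pairs: each low-res pixel maps to its original 2x2 high-res block.
X = np.array([[lo[i, j], 1.0] for lo in lows
              for i in range(4) for j in range(4)])
Y = np.array([hi[2 * i:2 * i + 2, 2 * j:2 * j + 2].ravel() for hi in highs
              for i in range(4) for j in range(4)])

# Least-squares fit: the learned "rules for how pixels change" as a matrix.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def upscale(lo):
    """Apply the learned mapping to double an image's resolution."""
    hi = np.zeros((lo.shape[0] * 2, lo.shape[1] * 2))
    for i in range(lo.shape[0]):
        for j in range(lo.shape[1]):
            hi[2 * i:2 * i + 2, 2 * j:2 * j + 2] = (
                np.array([lo[i, j], 1.0]) @ W).reshape(2, 2)
    return hi

pred = upscale(lows[0])
print(pred.shape)  # (8, 8): a 2x upscale of a 4x4 input
```

Replacing the linear map with a convolutional network trained on the same kind of pairs is, in essence, the leap from this sketch to ML Super Resolution.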

Why it’s important: In just the past 12 months, we’ve seen an explosion in AI and machine learning tool sets newly democratized for accessible consumer use. Yet many have required significant computing resources for top performance. Now, however, products like Pixelmator’s “ML Super Resolution” deliver powerful algorithms trained on significantly smaller datasets that require far less memory and compute. Particularly in the art and imaging realm, the availability of such algorithms to end users will lower the barrier for artists, filmmakers, and small firms in everything from design to marketing.

Lamborghini places emphasis on additive manufacturing, extends partnership with Carbon.

What it is: 3D printing company Carbon has just expanded its partnership with Lamborghini. Famous for its Digital Light Synthesis (DLS) technology—which prints components using a photochemical process leveraging oxygen and light—Carbon plans to use DLS to manufacture the dashboard air vents for Lamborghini’s first hybrid production car, the Sián FKP 37. This development follows Carbon’s earlier work with the carmaker, producing textured fuel caps and air duct clips for the Urus super SUV. Carbon’s DLS has cut Lamborghini’s production time for these parts to just 12 weeks, and it can produce geometric shapes that are extraordinarily difficult to mold using traditional processes, which often require multiple design iterations.

Why it’s important: 3D printing is transforming the manufacturing industry (literally) from the bottom up, in everything from minute, customized automotive parts to rocket engine components and organ tissues. We’re rapidly entering an era of programmable production, allowing for far cheaper, more versatile, and quickly prototyped goods. As 3D printing technologies move from deceptive to disruptive, what potential uses might you experiment with in your own business?

Building robotic safety inspectors nabs Gecko Robotics $40 million.

What it is: Pittsburgh-based Gecko Robotics has just landed US$40 million in additional financing, which it will use to add an additional 40 robots to its 60-bot fleet, helping meet demand for the company’s safety and infrastructure monitoring services. Gecko’s wall-climbing robots perform non-destructive testing on industrial machinery like tanks and boilers, assessing metrics like wall thickness, cracking, and pitting. Gecko’s robots can even predictively detect other issues likely to result in downtime or more serious hazards, such as explosions and emissions leaks.
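The kind of analysis such inspection data enables can be sketched simply: flag ultrasonic thickness readings already below a safety limit, and extrapolate a thinning trend to estimate when intervention will be needed. All numbers, thresholds, and methods below are illustrative, not Gecko's actual techniques.

```python
import numpy as np

MIN_SAFE_MM = 6.0  # hypothetical minimum allowable wall thickness

def flag_thin_spots(thickness_mm):
    """Indices of thickness readings already below the safety limit."""
    t = np.asarray(thickness_mm)
    return np.flatnonzero(t < MIN_SAFE_MM)

def years_to_limit(yearly_avg_mm):
    """Fit a linear wear rate to annual average thickness readings and
    estimate years until the average crosses the safety limit."""
    y = np.asarray(yearly_avg_mm, dtype=float)
    x = np.arange(len(y))
    rate, offset = np.polyfit(x, y, 1)   # mm per year (negative = thinning)
    if rate >= 0:
        return float("inf")              # no thinning trend detected
    current = offset + rate * (len(y) - 1)
    return (MIN_SAFE_MM - current) / rate

print(flag_thin_spots([8.1, 7.9, 5.4, 8.0]))           # [2]
print(round(years_to_limit([9.0, 8.5, 8.0, 7.5]), 1))  # 3.0
```

The "predictive" part is just the second function: the robot's value is in collecting dense, repeatable readings so that a trend like this is fittable at all.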

Why it’s important: While much of today’s public debate on robotics centers on the replacement of human labor, one emerging trend in the industry involves preventative, automated approaches to safety and compliance. In many such cases, robotics and software services like Gecko’s augment human experts’ capabilities by granting them data that would otherwise be extremely difficult or hazardous to collect manually. Increasingly collaborators for human practitioners, robotics and AI are beginning to tackle industrial monitoring tasks never before possible, preventing infrastructure and machinery damage before it occurs.

How artificial intelligence is making health care more human.

What it is: MIT Technology Review Insights, in association with GE Healthcare, recently released survey results of over 900 healthcare professionals, revealing the ways in which AI is already being used in healthcare. Nearly 80% of respondents are set to increase their budgets for AI applications in 2020. And today, the key areas in which AI is already being deployed include: (1) patient flow optimization; (2) medical imaging and diagnostics; (3) automation of electronic health records via natural language processing tools; (4) predictive analytics; and (5) patient data and risk analytics. In terms of outcomes, 78% of medical staffers report that AI deployments have already improved workflows, reducing time spent on mundane administrative tasks and thus unlocking more time for procedures and patient interactions. Even more importantly, AI is reducing clinical errors, and 75% of AI-using medical staff agree that the technology has improved predictions in disease treatment.

Why it’s important: AI is transforming the healthcare system as we know it, touching everything from diagnostics to drug discovery. With “smart” patient scheduling tools, doctors can even see more patients per day. And AI is helping optimize the outcomes of the appointments themselves. Medical professionals typically spend 10% of their workweek taking notes or updating electronic health records. As AI begins to systematize these repetitive tasks, doctors are freed to dedicate more time to procedures and patient relations. Applying AI algorithms to medical imaging has also already improved clinical decision-making. For reference, surveyed doctors who have yet to adopt AI report clinical error as their key challenge two-thirds of the time (more than double the figure for those who have adopted AI tools). Moving forward, doctors and healthcare workers must continue to collaborate with machines, leveraging comprehensive pools of AI-mediated data to make important medical decisions. An invaluable new collaborator, AI is helping doctors and clinicians focus on what they do best, humanizing the healthcare industry and improving the patient experience.

Want more conversations like this?

Abundance 360 is a curated global community of 360 entrepreneurs, executives, and investors committed to understanding and leveraging exponential technologies to transform their businesses. A 3-day mastermind at the start of each year gives members information, insights and implementation tools to learn what technologies are going from deceptive to disruptive and are converging to create new business opportunities. To learn more and apply, visit A360.com.

Abundance Digital, a Singularity University program, is an online educational portal and community of abundance-minded entrepreneurs. You’ll find weekly video updates from Peter, a curated news feed of exponential news, and a place to share your bold ideas. Click here to learn more and sign up.

Know someone who would benefit from getting Abundance Insider? Send them to this link to sign up.

(*Both Abundance 360 and Abundance Digital are Singularity University programs.)

Topics: Abundance Insider, AR/VR, AI, machine learning, Artificial Intelligence, Batteries, solar energy, drone technology, social responsibility

Abundance Insider: December 13th, 2019

By Peter H. Diamandis on Dec 13, 2019

In this week's Abundance Insider: Coca-Cola’s autonomous truck pilot, a new approach to computer vision, and the mysterious ‘X17 particle.’

High-Tech Planes, Supercomputers and Helitankers Help Fight Wildfires.

What it is: Firefighters are increasingly adopting sophisticated technologies in the fight against blazes. Fire departments across Southern California have now partnered with Dr. Ilkay Altintas, head of the WIFIRE Lab at the San Diego Supercomputer Center (part of UCSD). WIFIRE combines weather data, topography, and information about the dryness of brush to model in near-real time how a wildfire might spread and at what speed. This, in turn, helps local leaders create evacuation plans and determine where departments might deploy fire crews. Until recently, mapping fires was a laborious, hand-drawn process that often required as much as a day of work. Armed with far more accurate data, firefighters and partners such as Coulson Aviation are also using military-grade night-vision goggles to operate at night, when winds often die down and give teams an advantage over the fire. The goggles allow crews to identify key geographic targets and hover for water refills without having to land their helitankers.
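A fire-spread model of this general family can be sketched as a cellular automaton driven by fuel dryness and wind. This is a toy illustration of the modeling idea only; WIFIRE's actual system couples real weather, terrain, and fuel data, and all rules and numbers here are assumed.

```python
import numpy as np

# Cell states: 0 = unburned fuel, 1 = burning, 2 = burned out.
def step(grid, dryness, rng, wind_bias=0.2):
    """Advance the fire one time step over a grid of fuel cells."""
    new = grid.copy()
    for i, j in np.argwhere(grid == 1):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if (0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]
                    and grid[ni, nj] == 0):
                # Drier fuel ignites more readily; an eastward wind
                # (dj == 1) biases spread in that direction.
                p = dryness[ni, nj] + (wind_bias if dj == 1 else 0.0)
                if rng.random() < p:
                    new[ni, nj] = 1
        new[i, j] = 2  # a burning cell burns out after one step
    return new

rng = np.random.default_rng(0)
dryness = np.full((5, 5), 0.6)      # uniform fuel dryness (probability-like)
grid = np.zeros((5, 5), dtype=int)
grid[2, 2] = 1                      # ignition point
for _ in range(3):
    grid = step(grid, dryness, rng)
print((grid > 0).sum())  # cells the fire has reached after three steps
```

Running many randomized simulations like this from a live ignition point, then reporting where the fire front is likely to be in an hour, is the shape of the near-real-time forecast described above.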

Why it’s important: As the cost of computing power plummets, converging technologies are beginning to aid in disaster relief at price tags now affordable for budget-strapped state and local governments. While AI grants fire departments far more predictive capacity and higher mapping speeds, its hardware counterparts (whether drones, sensors, or the like) are finding their way into other realms of disaster relief, and even disaster prevention.

Coca-Cola test-drives Einride’s autonomous truck in Sweden.

What it is: Coca-Cola European Partners (CCEP) will soon release a fleet of Einride autonomous electric transport vehicles onto the streets of Jordbro, Sweden. Founded in 2016, Einride has produced sleek “T-Pod” electric trucks that require no onboard driver, though remote operators can take control if needed. Currently, the T-Pods carry 200kWh batteries that allow for 124 miles of travel between charges. The fleet will transport goods from two warehouses, operated by CCEP and leading food retailer Axfood, just outside of Stockholm. Some will remain within fenced areas while others will operate on public roads.

Why it’s important: Road freight transport contributes about 7 percent of global carbon dioxide emissions each year. CCEP aims to use the Einride vehicles to meet its sustainability and efficiency goals, projecting that it could cut carbon dioxide emissions by up to 90 percent with the new fleet. If the pilot proves successful, the fleet could scale across Sweden, where CCEP distributes Coca-Cola products nationwide. Sustainable supply chains will grow increasingly important as consumers demand greater transparency in their purchasing decisions and place more emphasis on environmentally responsible goods.

Observe.ai raises $26 million for AI that monitors and coaches call center agents.

What it is: While numerous software-as-a-service (SaaS) platforms are beginning to disrupt the customer service realm, some SaaS products are designed to augment human customer care workers. One example is U.S.-Indian startup and Y Combinator alum Observe.ai, which just announced a $26 million Series A funding round. Observe.ai uses natural language processing (NLP) to analyze conversations between human agents and customers. After transcribing each call, Observe’s platform runs sentiment analysis, draws correlations between the support agent’s verbal and behavioral data and the customer’s happiness level, and then ultimately determines overall customer satisfaction. This data is then used to benchmark top performers and find best practices across teams. Results can even be applied to other discrete use cases, such as monitoring compliance in the healthcare industry, where conversations involve sensitive and often legally protected information.
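At its simplest, the transcript-scoring step can be sketched as lexicon-based sentiment averaged over a customer's utterances. The word lists and scoring below are toy assumptions for illustration; Observe.ai's actual NLP models are far richer.

```python
# Hypothetical sentiment lexicons (illustrative, not a production word list).
POSITIVE = {"thanks", "great", "resolved", "helpful", "perfect"}
NEGATIVE = {"frustrated", "broken", "cancel", "waiting", "unacceptable"}

def sentiment_score(utterance):
    """Net sentiment of one utterance: (positives - negatives) per word."""
    words = utterance.lower().split()
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return hits / max(len(words), 1)

def call_satisfaction(transcript):
    """Average customer sentiment over a call, as a crude satisfaction proxy."""
    scores = [sentiment_score(u) for speaker, u in transcript
              if speaker == "customer"]
    return sum(scores) / max(len(scores), 1)

call = [
    ("customer", "I have been waiting and I am frustrated"),
    ("agent", "Sorry about that, let me fix it now"),
    ("customer", "Great that resolved it thanks"),
]
print(round(call_satisfaction(call), 2))  # 0.17: negative open, positive close
```

Aggregating scores like this across thousands of calls per agent is what makes benchmarking and best-practice mining possible.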

Why it’s important: Observe.ai joins NICE, Verint, Cogito, Gong, Chorus.ai, and others in a growing cohort of companies using AI to improve the connection between humans, rather than replace it outright. While many fear the encroachment of AI and automation on the contemporary job market, in what areas might we flip this concern? How might we leverage AI to augment our social and professional skills, provide better service, or find common ground with our clients?

Machine vision that sees things more the way we do is easier for us to understand.

What it is: Researchers have devised a new method for training neural networks in image recognition. Rather than training their model on full images of birds, scientists from Duke University and MIT’s Lincoln Laboratory trained a network specifically on features of birds: beak shape, head shape, feather coloration and the like. When the algorithm is then presented with a new picture of a bird, it searches for specific features, generates predictions about the bird’s species, and uses the cumulative evidence to come to a final conclusion.
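The evidence-accumulation idea can be sketched as nearest-prototype scoring in a feature space: each class owns prototype feature vectors (beak shape, head shape, and so on), and a new input is classified by summing its similarity to each class's prototypes. The two-dimensional "features" and prototype values below are toy assumptions, not the Duke/MIT network.

```python
import numpy as np

# Hypothetical per-class prototype feature vectors (toy 2-D feature space).
PROTOTYPES = {
    "cardinal": np.array([[0.9, 0.1], [0.8, 0.3]]),
    "blue_jay": np.array([[0.2, 0.9], [0.1, 0.8]]),
}

def classify(features):
    """Sum similarity (negative distance) to each class's prototypes and
    return (best class, per-class evidence). The evidence dictionary is
    inspectable, which is the interpretability point of the method."""
    evidence = {
        cls: float(sum(-np.linalg.norm(features - p) for p in protos))
        for cls, protos in PROTOTYPES.items()
    }
    return max(evidence, key=evidence.get), evidence

label, evidence = classify(np.array([0.85, 0.2]))
print(label)  # "cardinal": its prototypes are nearest in feature space
```

Unlike an end-to-end black box, the per-prototype terms in `evidence` show exactly which learned features pushed the final decision.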

Why it’s important: Recently, the push to make neural networks more explainable and transparent has gained significant traction in both the private sector and academia. Especially in high-stakes applications—such as medical image recognition—AIs that can demonstrate which features contributed to their decisions will help solve the longstanding “black box” problem associated with today’s algorithms. By engineering neural networks to devise predictions in a manner more akin to our own human thought processes, AI engineers will also be able to more easily diagnose problems when networks make incorrect predictions.

A nanotube material conducts heat in just one direction.

What it is: Scientists at the University of Tokyo have now developed a method of synthesizing aligned carbon nanotubes. Normally, producing nanotubes in a bulk material results in poorly aligned configurations of individual tubes. Yet to take advantage of the tubes’ thermal properties, it is necessary to align them end-to-end. To achieve this, the researchers used a technique known as controlled vacuum filtration, a procedure whereby nanotubes are mixed with a liquid solution whose properties induce a natural self-organization of the tubes. The liquid is then carefully removed with a vacuum, leaving a thin sheet of highly aligned nanotubes. This sheet has some extraordinary properties: perhaps most importantly, it has one-way thermal conductivity. This means that the sheet conducts heat about 1,000 times more efficiently along the alignment than perpendicular to it.
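A back-of-envelope view of what roughly 1,000-fold anisotropy means, via Fourier's law. The conductivity values below are assumed for illustration; the article reports only the ratio.

```python
# Assumed conductivities: only their 1,000x ratio comes from the article.
K_ALONG = 100.0               # W/(m*K) along the nanotube alignment
K_ACROSS = K_ALONG / 1000.0   # perpendicular to the alignment

def heat_flux(k, delta_t, thickness):
    """Fourier's law for steady 1-D conduction: q = k * dT / L, in W/m^2."""
    return k * delta_t / thickness

delta_t, thickness = 10.0, 1e-6   # a 10 K drop across a 1-micron sheet
q_along = heat_flux(K_ALONG, delta_t, thickness)
q_across = heat_flux(K_ACROSS, delta_t, thickness)
print(round(q_along / q_across))  # 1000: heat overwhelmingly follows the alignment
```

Under the same temperature gradient, heat flows a thousand times faster along the tubes, which is why such a sheet can steer heat away from a hot spot rather than letting it spread in all directions.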

Why it’s important: Heat leakage is a tremendous problem for electrical engineers and circuit designers. This one-way thermal conduction material could serve as a game-changing solution, as it mitigates the need for large cooling systems and can interact at the nanoscale (the size of modern-day transistors). Needless to say, more efficient cooling systems will open tremendous new possibilities in design for computer hardware engineers.

Mysterious ‘Particle X17’ Could Carry a Newfound Fifth Force of Nature, But Most Experts Are Skeptical.

What it is: Four fundamental forces—gravity, electromagnetism, the strong force, and the weak force—govern the universe as we know it. Yet the reported discovery of a particle dubbed X17 could add a fifth force to this list. Researchers at the Institute of Nuclear Research in Hungary first reported evidence of the particle in 2016, when they noticed radioactive beryllium atoms releasing pairs of electrons and their antimatter counterparts (positrons) at specific angles. Based on this evidence, the team concluded that there must be an intermediary “particle X” that the beryllium atom converts into before emitting the electron-positron pairs. With a mass of 17 megaelectronvolts, the particle earned its name X17. More recently, the team even detected a similar X17 particle of the same mass in the radioactive decay of helium. While most matter is made up of fermions, the X17 particle appears to be a boson, the class of particles that carry energy and can mediate forces.

Why it’s important: Studying the X17 boson could unlock important insights into the nature of dark matter and potentially even a fifth force. Dark matter constitutes about 85 percent of the matter in the universe, yet it interacts only through gravity and does not absorb or emit light. The widely accepted Standard Model of particle physics could be upended by this finding. Most research in the past fifty years has relied heavily on high-energy accelerators to collide particles at rapid speeds, but this team’s work offers a lower-cost alternative for probing our universe. While the findings have not yet been peer-reviewed, several groups are working to verify the Hungarian institute’s results, driving progress towards a more accurate understanding of the matter that makes up our universe.

Abundance Insider: December 6th, 2019

By Peter H. Diamandis on Dec 6, 2019

In this week's Abundance Insider: DeepMind’s latest AI win, a promising treatment candidate for pancreatic cancer, and 5 emerging energy technologies.

Google DeepMind gamifies memory with its latest AI work.

What it is: If you’ve ever wished you could go back in time to tell your younger self a critical piece of advice, AIs may soon be able to do just that within their own networks. Google’s DeepMind unit recently announced a program that resembles the human capacity to mentally time travel by incorporating long-term consequences into machine learning. AI programs typically rely on reinforcement learning with short-term, immediate “rewards.” DeepMind’s new program, called Temporal Value Transport (TVT), augments reinforcement learning by transporting reward signals backwards from far in the future to the past decisions that produced them. The program operates in simulated worlds, where it might “explore” a path to a certain target. If the program later uses its memory of this path in a future pursuit of the same target, it is rewarded. The underlying system, termed the “Reconstructive Memory Agent,” encodes and retrieves memories of past events to assign this credit.
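The credit-assignment gap TVT addresses can be illustrated in a toy comparison with standard discounted returns. This sketches the idea only, not DeepMind's implementation.

```python
# An action at t=0 only pays off at t=50 (e.g., a key discovered early
# that opens a door much later).
GAMMA = 0.9
rewards = [0.0] * 51
rewards[50] = 1.0  # distant payoff caused by the exploration at step 0

def discounted_return(rewards, t, gamma=GAMMA):
    """Standard RL return: future rewards shrink geometrically with delay."""
    return sum(gamma ** (k - t) * r for k, r in enumerate(rewards[t:], start=t))

# With ordinary discounting, the learning signal reaching step 0 has decayed
# to gamma**50 (about 0.005) -- far too weak to credit that step.
standard_signal = discounted_return(rewards, t=0)

# TVT-style transport: the agent's memory links the payoff back to the step
# that caused it, so the reward is credited there undiminished.
linked_cause = 0                    # step the memory identifies as the cause
transported_signal = rewards[50]    # credited at linked_cause in full
print(round(standard_signal, 4), transported_signal)  # 0.0052 1.0
```

The contrast is the whole point: discounting erases the connection over long delays, while memory-based transport preserves it.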

Why it’s important: Many sociologists and economists have explored the realm of long-term human decision-making. While DeepMind’s TVT is not entirely representative of human thought, the program’s mechanisms could inform how we understand our own. We easily learn to avoid hot stoves after accidentally burning our hand once. Yet many of us fall into the long-term pattern of following an unfulfilling career path. Because long-term decisions lack immediate feedback, the signs pointing us in the “right direction” are difficult to detect and learn from early on. With the help of AIs that generate future pathways and then inform us of consequences in the present, humans could learn in entirely new ways. From investment decisions to government policy, wisdom from the future will undoubtedly aid our present choices.

Jet-powered VTOL drone is like a quadcopter on steroids.

What it is: Texas-based FusionFlight has unveiled a jet-powered drone capable of vertical take-off and landing. Rather than using propellers and electric motors like traditional drones, this aircraft ups the ante with four diesel-powered microturbine jet engines and a proprietary thrust-vectoring system. Known as the H-Configuration, the vectoring system enables the drone to direct its engines’ thrust either vertically (for take-off and landing) or horizontally (in forward flight). Reportedly capable of a top speed of over 300 mph, the final production version includes a fuel tank sufficient for 30 minutes of hovering and 15 minutes of cruising. Down the line, FusionFlight aims to boost speed and performance with afterburners and other components.

Why it’s important: Drones are rapidly permeating our airspace, now used for crop monitoring, military operations, delivery services, and viral YouTube videos. FusionFlight’s design expands the range of possibilities for drones, especially for time-sensitive tasks where speed is key. Furthermore, with its jets producing a combined 200 horsepower, the drone can carry up to 40 pounds of cargo, making it an ideal candidate for shipping and delivery applications.

Israeli scientists find a way to treat deadly pancreatic cancer in 14 days.

What it is: After just two weeks of daily injections, a new treatment reduced the number of cancerous pancreatic cells in mice by up to 90 percent. Led by Professor Malka Cohen-Armon of Tel Aviv University, the team used a molecule called PJ34, originally developed to treat stroke patients. After implanting human pancreatic cancer cells into immune-suppressed mice, the team intravenously injected PJ34 daily for 14 days. During the cell replication process known as mitosis, PJ34 triggers an anomaly that causes the cell to self-destruct. In cancer cells that are duplicating uncontrollably, this kind of stop signal is critical to controlling the tumor. Just 30 days after the treatment ended, researchers observed an 80-90 percent reduction in cancer cells, with no detectable harm to healthy cells.

Why it’s important: Pancreatic cancer is one of the most difficult cancers to treat, and few patients survive more than five years after diagnosis. Today, most treatment options involve chemotherapy, a systemic approach aimed at halting cell division in the entire body. Yet because this form of therapy lacks discriminatory targeting, cell replication slows across the entire body, causing many patients to experience negative side effects like hair loss, inflammation of the digestive tract, and decreased blood cell production. A solution like PJ34, which specifically targets only cancer cells, could revolutionize cancer therapy and significantly enhance patient quality of life. Venturing beyond pancreatic cancer, the team even successfully tested the treatment on cell cultures of aggressive forms of breast, lung, brain and ovarian cancer. According to the team, this treatment is about two years away from human trials, potentially promising a major boost to healthy human lifespans.

5 Emerging Energy Technologies to Watch Out For in 2020.

The story: This year, technologies in solar, wind, and battery storage have achieved remarkable economies of scale and now compete almost at parity with fossil fuels. In the coming year, breakthrough after breakthrough may finally usher in a watershed moment for the energy sector, and experts recommend keeping an eye on several key areas.

What to watch: (1) Floating solar arrays have surged in popularity for use on freshwater bodies, but photovoltaic solar panels are now moving to the open ocean. (2) Static compressors, which help to maintain the constant frequency of electric power grids, are starting to see an uptick in certain countries and should help with overall incorporation of renewables into the power grid. (3) Several companies are now working to increase the power capacity of dynamic export cables. These are critical to bringing power from offshore floating wind turbines (as opposed to static turbines fixed to the seafloor) back to shore. (4) Now backed by significant funding, molten salt reactors are a new form of nuclear power that promise to emit less radiation than traditional nuclear. (5) Renewably produced hydrogen has witnessed considerable growth in at least 10 countries, with projected utility in everything from industrial heating and cooling to the integration of renewables into the grid. As plummeting renewable energy costs and improved grid storage propel us into 2020, we may soon expect dramatic shifts in the global energy economy.

SLAC scientists invent a way to see attosecond electron motions with an X-ray laser.

What it is: Researchers at Stanford University have developed a method to measure electrons at an unfathomable timescale: 280 attoseconds, to be precise. For reference, an attosecond is to a second what a second is to roughly 31.71 billion years, longer than the age of the universe. To achieve this, the researchers developed a procedure involving X-ray bursts generated by fast-moving electron bursts. To see at smaller and smaller timescales, scientists needed to create shorter and more intense bursts. These bursts, in turn, create the requisite intense and fast X-rays when they are passed through a magnet. Ultimately, the Stanford scientists were able to develop a more capable beam using a technique called XLEAP, first proposed about 14 years ago but now finally coming to fruition.
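The analogy above is easy to verify with a few lines of arithmetic (using a 365-day year):

```python
# One second contains as many attoseconds as ~31.71 billion years
# contain seconds.
ATTOSECONDS_PER_SECOND = 1e18
SECONDS_PER_YEAR = 365 * 24 * 3600  # 365-day year

years = ATTOSECONDS_PER_SECOND / SECONDS_PER_YEAR
print(round(years / 1e9, 2))  # 31.71 (billion years)
```

For comparison, the universe is roughly 13.8 billion years old, so the analogy's timespan is more than twice the age of the universe.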

Why it’s important: This is a tremendous boost for ultrafast science. “Until now, we could precisely observe the motions of atomic nuclei, but the much faster electron motions that actually drive chemical reactions were blurred out,” explained SLAC scientist James Cryan, one of the paper’s lead authors and an investigator with the Stanford PULSE Institute (a joint institute of SLAC and Stanford University). “With this advance, we’ll be able to use an X-ray laser to see how electrons move around and how that sets the stage for the chemistry that follows. It pushes the frontiers of ultrafast science.” What does this mean? Now capable of observing at infinitesimal scales, we may soon probe some of the world’s most fundamental mysteries, particularly in photosynthesis and biochemistry.

Want more conversations like this?

Abundance 360 is a curated global community of 360 entrepreneurs, executives, and investors committed to understanding and leveraging exponential technologies to transform their businesses. A 3-day mastermind at the start of each year gives members information, insights and implementation tools to learn what technologies are going from deceptive to disruptive and are converging to create new business opportunities. To learn more and apply, visit A360.com.

Abundance Digital, a Singularity University program, is an online educational portal and community of abundance-minded entrepreneurs. You’ll find weekly video updates from Peter, a curated news feed of exponential news, and a place to share your bold ideas. Click here to learn more and sign up.

The Future is Faster Than You Think: Want a chance to read my new book before anyone else? Join the Future is Faster Than You Think launch team (applications close on December 6th)! Get an advanced digital copy, access to our private Facebook group, behind the scenes specials, a live Q&A with Steven and me, and hundreds of dollars in exclusive bonuses. Click here for details.

Know someone who would benefit from getting Abundance Insider? Send them to this link to sign up.

(*Both Abundance 360 and Abundance Digital are Singularity University programs.)

Topics: Abundance Insider AR/VR AI machine learning Artificial Intelligence Batteries solar energy drone technology social responsibility
7 min read

Abundance Insider: November 29th, 2019

By Peter H. Diamandis on Nov 29, 2019

In this week's Abundance Insider: New haptic device for VR, socially aware algorithms, and NASA’s supermassive black hole finding.

New virtual reality interface enables “touch” across long distances.

What it is: A Northwestern University team has created a lightweight wearable patch that vibrates when activated by another user’s touch, from miles away. Using this technology, a mother was able to remotely “pat” her son on the back while video chatting with him. As she touched a screen interface, this data was communicated through a haptic device on her son’s back, stimulating identical touch patterns. Most of today’s haptic feedback devices rely on batteries, requiring bulky containers that cannot fit snugly against the skin. By contrast, this new patch consists of a vibrating disk—only a few millimeters thick—that is powered by near-field communication, a wireless power transfer technology typically used in ID card locks. External silicone sheets protect the two inner layers of the device: one containing the near-field communication technology to power the device, and another holding miniature actuators that simulate various degrees of touch pressure. Led by physical chemist and materials scientist John A. Rogers, the team now aims to make the patch more flexible and lightweight before commercializing the device through their newly established startup.

Why it’s important: While today’s audiovisual interfaces have long captured our eyes and ears, incorporating the dimension of touch into our devices will add another layer of immersion in tomorrow’s digitally augmented world. For VR and AR devices, this haptic technology could transform virtual simulations into tactile physical environments—without any real materials. The Northwestern team’s device currently conveys only perpendicular pressure against the skin, but eventually the patch may be able to simulate even twisting motions or temperature changes. The technology will also likely expand beyond simple patches into full body suits, capable of translating touch interactions between individuals, or between game worlds and reality. The ability to see, hear, and feel in a digital simulation will drastically disrupt travel, entertainment, and human interaction.

New Amazon capabilities put machine learning in reach of more developers.

What it is: Amazon has just announced a new approach that will make machine learning models more accessible to both developers and business users. By taking advantage of tools like Amazon QuickSight, Aurora, and Athena, anyone who can write basic SQL can now make and use predictions in their applications without having to generate custom code. To make the process even easier, these machine learning models themselves can come pre-built from Amazon Web Services (AWS), be developed by an in-house data science team, or be purchased in AWS’s ML marketplace.

Why it’s important: As explained by AWS cloud and open source executive Matt Asay, “there is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard.” Amazon’s initiative marks a significant step towards machine learning’s User Interface moment, removing friction and making AI’s predictive power more accessible to a large set of users. Keep on the lookout for a surge in easy-to-build applications and experiments as sophisticated Software as a Service (SaaS) products hit the marketplace.

Socially aware algorithms are ready to help.

What it is: In light of growing concern about AI’s obscure inner workings, the software engineers and data scientists responsible for many of the algorithms involved in our everyday online activity have increasingly adopted socially aware algorithmic structures. For instance, data scientists now use a technique known as “differential privacy” to add random “noise” to data sets, preserving the overall structure while obscuring individual data. This, in turn, helps to anonymize our data and thereby protect user privacy. Other techniques include the addition of fairness criteria, such that predictive models’ outputs—from creditworthiness to insurance-related decisions—minimize bias where possible.
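
As a minimal sketch of the Laplace mechanism behind differential privacy (the dataset, epsilon value, and function names here are illustrative, not drawn from any production system):

```python
import math
import random

def noisy_count(true_count, epsilon):
    """Laplace mechanism: for a counting query (sensitivity 1), adding
    Laplace(0, 1/epsilon) noise yields epsilon-differential privacy."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Individual records stay hidden, but the aggregate structure survives.
ages = [34, 29, 41, 52, 38, 45, 31]
true_count = sum(1 for a in ages if a > 40)            # 3 people over 40
print(round(noisy_count(true_count, epsilon=0.5), 1))  # a value near 3, varying per run
```

Lower epsilon means more noise and stronger privacy; analysts tune this trade-off against the accuracy the aggregate query needs.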

Why it’s important: As machine learning algorithms are granted greater responsibility over socially consequential decisions (think: our ability to take out loans or a legal decision to grant bail), problems of privacy, bias, disinformation, filter bubbles, and transparency abound. As a result, AI engineers have begun working on algorithms’ ability to explain their decisions, overcoming their status as mysterious “black boxes.” Meanwhile, the above fairness conditions are a promising start in our pursuit to build equitable, unbiased, and evidence-based algorithms: predictive models that prove accurate without perpetuating “fake news,” racial inequalities, and a slew of other social challenges. Differential privacy, fairness conditions, and similar tweaks do result in some costs to algorithmic “utility” and error rate in the short-term. However, such initiatives will be essential for a future wherein machine learning helps safeguard equitable, systemic decision-making and privacy, while protecting against some of today’s worst institutional tendencies.

NASA finds supermassive black hole birthing stars at “furious rate.”

What it is: Scientists have now discovered a supermassive black hole at the center of a distant galaxy cluster “furiously” birthing stars at a rate about 500 times that of the Milky Way Galaxy. Using data from the Hubble Space Telescope and NASA’s Chandra X-Ray Observatory, the team of astronomers was able to observe the equivalent of trillions of Suns’ worth of hot gas cooling around the black hole within the Phoenix Cluster, some 5.8 billion light years away.

Why it’s important: Typically, the supermassive black holes at the center of galaxy clusters are too active for star formation. They usually blow powerful streams of gas around the region, heating up interstellar hydrogen and preventing the gas from cooling down enough to trigger the creation of new stars. However, as this black hole in the Phoenix Cluster is smaller than others, its jets are not as powerful, allowing for prolific star formation. From a scientific perspective, observations like this enable us to better understand and characterize the lifecycle of galaxy clusters and the role that black holes play in both the preclusion and creation of new stars.

Topics: Abundance Insider AR/VR AI space exploration machine learning Artificial Intelligence Batteries nasa social responsibility haptic devices
9 min read

Abundance Insider: November 1st, 2019

By Peter H. Diamandis on Nov 1, 2019

In this week's Abundance Insider: AR-aided surgeries, remote human brain-to-brain collaboration, and a new flu-targeting antibody.

MediView XR raises $4.5 million to give surgeons X-ray vision with AR.

What it is: MediView XR recently raised US$4.5 million to further develop its Extended Reality Surgical Navigation system. Accessed through the Microsoft HoloLens, MediView’s product grants surgeons a form of “X-ray vision” when conducting cancer ablations and biopsies. The system generates a personalized 3D holographic model for each patient based on CT and MRI scans. Next, ultrasound imaging updates the holographic display throughout the procedure. This process not only reduces exposure to the harmful X-ray radiation used in standard procedures today, but also improves visual acuity by translating 2D data into three dimensions. Surgeons can even rotate around the body while AR-overlaid visuals remain accurately mapped to the patient. Meanwhile, hand-tracking and voice commands allow surgeons to access any needed information on the spot. In its first set of human trials, MediView has used its system on five live tumor patients and in August began a nine-patient trial. Leveraging its newly acquired capital, the company further aims to achieve FDA approval by 2021.

Why it’s important: Surgeons around the world are forced to make sense of 2D images for 3D applications. MediView’s technology would eliminate this hurdle and reduce surgeon error in doing so. Personalized 3D visualizations could also be used to educate patients on their conditions in a more intuitive manner. The educational applications of AR extend to medical schools as well, where mapping real data into practice procedures could boost student engagement and learning. The success of tumor removal surgeries is largely dependent on how precisely surgeons can incise the tumor, ensuring no cancerous traces are left behind. As AR headsets grow increasingly sophisticated, precise 3D models (coupled with biomarkers injected in the bloodstream to mark tumor cells) could vastly improve patient outcomes. MediView’s CEO John Black, who has performed over 2,000 surgeries himself, aims to transform the way surgeons interact with real-time data visualizations.

Engineers develop a new way to remove carbon dioxide from air: The process could work on the gas at any concentration, from power plant emissions to open air.

What it is: Scientists from MIT have developed a new method of extracting carbon dioxide from streams of air or feed gas, even at the far lower concentration levels found in the general atmosphere. The technology essentially works like a large battery: charging as CO2-laden gas passes over its polyanthraquinone-coated electrodes, and discharging as it releases a pure stream of carbon dioxide. Unlike some alternatives, the method requires no large pressure differences or chemical processes and can even supply its own power, courtesy of the discharge effect.

Why it’s important: Most carbon capture technologies require high concentrations of CO2 to work, or considerable energy inputs, such as high pressure differences or heat to run chemical processes. This device works at room temperature and regular pressure. Furthermore, it can generate both electricity and pure CO2 streams, valuable for a range of agricultural use cases, carbonation in beverages, and various other applications. Of course, the real benefit of scaling such a method involves our battle against climate change, where our ability to scrub the air of carbon dioxide could be a critical step in reversing environmental catastrophe.

Scientists Demonstrate Direct Brain-to-Brain Communication in Humans.

What it is: For the first time, humans have achieved direct brain-to-brain communication through non-invasive electroencephalographs (EEGs). In a newly published study, three subjects were tasked with orienting a block correctly in a video game. Two subjects in separate rooms were designated as “senders” and could see the block, while the third “receiver” relied solely on sender signals to correctly position the block. EEG signals from the sender brains were converted into magnetic pulses delivered to the receiver via a transcranial magnetic stimulation (TMS) device. If the senders wanted to instruct rotation, for instance, they focused on a high-frequency light flashing, which the receiver would see as a flash of light in her visual field. To stop rotation, senders would focus on a low-frequency light, which the receiver would then interpret as light absence in the set time interval. Using this binary stop/go code, the five groups tested in this “BrainNet” system achieved over 80 percent accuracy in aligning the block.
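
The binary stop/go scheme can be illustrated with a toy decoder that checks which flicker frequency dominates a sender's EEG spectrum (the 17 Hz/15 Hz values and this single-bin classifier are simplifications for illustration; the actual study used calibrated EEG pipelines):

```python
import math

def band_power(signal, freq, sample_rate):
    """Signal power at `freq` from a single discrete Fourier coefficient."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate) for i, s in enumerate(signal))
    return (re * re + im * im) / n

def decode(signal, sample_rate=256, f_go=17.0, f_stop=15.0):
    """Return 'rotate' if the go-frequency flicker dominates, else 'hold'."""
    go = band_power(signal, f_go, sample_rate)
    stop = band_power(signal, f_stop, sample_rate)
    return "rotate" if go > stop else "hold"

# Simulate one second of EEG dominated by the 17 Hz "go" flicker.
sr = 256
eeg = [math.sin(2 * math.pi * 17 * i / sr) + 0.3 * math.sin(2 * math.pi * 15 * i / sr)
       for i in range(sr)]
print(decode(eeg))  # rotate
```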

Why it’s important: A leader in the brain-to-brain communication field, Miguel Nicolelis has previously conducted studies that linked rat brains through implanted electrodes, effectively creating an “organic computer.” The rat brains synchronized electrical activity to the same extent as a single brain, and the super-brain routinely outperformed individual rats in distinguishing two electrical patterns. Building on this research, the leaders of the “BrainNet” human study claim that their non-invasive device could connect a limitless number of individuals. As brain-to-brain signaling grows increasingly complex, human collaboration will reach extraordinary levels, allowing us to uncover novel ideas and thought processes. Rather than building “neural networks” in software, operations like BrainNet are truly linking networks of neurons, creating massive amounts of biological processing power. We are fast approaching the prediction of Nobel Prize-winning physicist Murray Gell-Mann, who envisioned a future in which “thoughts and feelings would be completely shared with none of the selectivity or deception that language permits.”

By targeting flu-enabling protein, antibody may protect against wide-ranging strains: The findings could lead to a universal flu vaccine and more effective emergency treatments.

What it is: Scientists recently discovered a new antibody that could tremendously catalyze the pursuit of a universal flu vaccine. Experimenting on mice, the research team identified an antibody that binds to neuraminidase, an enzyme essential for the influenza virus’ replication inside the body. While today’s most widely used flu drug, Tamiflu, inactivates neuraminidase, the enzyme exists in various forms, rendering Tamiflu and similar drugs ineffective against numerous flu strains. Testing the versatility of their newly discovered antibody, however, the scientists administered lethal doses of different flu strains to a dozen mice, only to find that the new antibody protected all twelve from succumbing to infection.

Why it’s important: Now particularly salient, the seasonal fight against the flu has been an ongoing arms race between humanity and the virus. As strains mutate and develop resistance to existing medications, the need for alternative strategies has become far more pressing. This new research could accelerate our progress towards finally engineering a cure-all method for preventing and protecting against the flu, saving thousands of lives every year.

Elephants Under Attack Have An Unlikely Ally: Artificial Intelligence.

What it is: Researchers at Cornell University and elsewhere have recently started applying AI algorithms to track and save African forest elephants. As forest elephants have proven difficult to track visually, Cornell researcher Peter Wrege decided to set up microphones and listen for signs of elephant communication amidst the rainforest trees. First, Wrege and his team at the Elephant Listening Project divided the rainforest into a grid of 25 km² squares. By placing audio recorders in every grid square, roughly 23 to 30 feet up in the treetops, the team has collected hundreds of thousands of hours of jungle sounds—more than any human could possibly tag and make sense of. By transforming these audio files into spectrograms (visual representations of audio), the researchers could apply a neural network to the data and isolate sounds from individual elephants. In practice, these algorithmic outputs are now helping park rangers achieve an accurate census of the population, track elephant movement through the park over time, and even proactively prevent poaching activity in the bush.
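
The audio-to-spectrogram step can be sketched in a few lines (the frame sizes and test tone are illustrative; the Elephant Listening Project's actual pipeline is far more elaborate):

```python
import math

def spectrogram(samples, frame_len=64, hop=32, sample_rate=8000):
    """Magnitude spectrogram via a Hann-windowed short-time DFT.
    Returns a time-by-frequency grid suitable for a convolutional net."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = [samples[start + i] * (0.5 - 0.5 * math.cos(2 * math.pi * i / frame_len))
                 for i in range(frame_len)]
        mags = []
        for k in range(frame_len // 2):  # keep non-negative frequency bins
            re = sum(s * math.cos(2 * math.pi * k * i / frame_len) for i, s in enumerate(frame))
            im = sum(s * math.sin(2 * math.pi * k * i / frame_len) for i, s in enumerate(frame))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames

# A low-frequency test tone (elephant rumbles sit well below human speech)
# concentrates its energy in the lowest frequency bins.
sr = 8000
rumble = [math.sin(2 * math.pi * 125 * t / sr) for t in range(sr // 4)]
spec = spectrogram(rumble, sample_rate=sr)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 1  (bin width is 125 Hz at this frame length)
```

A classifier then treats each such grid like an image, which is what lets standard convolutional networks pick elephant calls out of jungle noise.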

Why it’s important: AI has now been heavily applied to narrow (and growing) use cases across medicine, financial forecasting, logistics, industrial design, navigation, and almost any mechanical or logic-based system you can think of. Yet today, it increasingly stands to help us understand unstructured environments and even animal-to-animal communication. Thanks to a convergence of computing power, sensors, and connectivity, methods such as that used by the Elephant Listening Project are now granting us a better understanding of extraordinarily complex natural ecosystems and species, and could aid in our pursuit to protect them.

First Look: Uber Unveils New Design For Uber Eats Delivery Drone.

What it is: Starting next summer in San Diego, Uber Eats and Uber Elevate will begin delivering dinner for two via drone. Unveiled at last week’s Forbes Under 30 Summit in Detroit, the delivery drone design features six rotors and rotating wings, and can carry a meal for two in its body. While the drone’s ideal trip time remains relatively short at eight minutes (including loading and unloading), the drone is capable of up to an 18-mile trip, divided into three six-mile legs (from launch to restaurant, to customer, and back to the launch area). The current plan involves flying from restaurants to a staging location, from which an Uber driver would then travel the last mile for hand-off to the consumer. Yet with an eye to the future of automated last-mile delivery, Uber is also considering landing drones on the roofs of delivery cars.

Why it’s important: Less than a year away from Uber Eats’ expected launch in San Diego airspace, we will soon begin to witness the commercialization of autonomous drones in everything from last-mile delivery to humanitarian aid. Not only are these trends slated to displace a significant percentage of cargo-related transit, but they will also fundamentally alter our urban networks and the way tomorrow’s businesses deliver personalized services.

Topics: Abundance Insider AR/VR AI health surgery bci elephants
7 min read

VR’s leap into the disruptive phase

By Peter H. Diamandis on Oct 23, 2019

In 2016, venture investments in VR exceeded US$800 million, while AR and MR received a total of $450 million. Just a year later, investments in AR and VR startups doubled to US$3.6 billion.

Topics: AR/VR virtual reality
5 min read

The Future is Faster Than You Think

By Peter H. Diamandis on Oct 6, 2019

Over the next three months, I am beyond excited to give you a sneak peek into my upcoming book, The Future is Faster Than You Think!

Topics: AR/VR Augmented Reality augmented manufacturing ar cloud
7 min read

augmented reality worlds: implications & opportunity

By Peter H. Diamandis on Sep 29, 2019

How do you want to see the world? As an on-going game? A constant shopping extravaganza? A classroom that spans the planet and never stops teaching? How the Earth appeared 100 years ago?

Topics: AR/VR Augmented Reality augmented manufacturing ar cloud
9 min read

How AR, AI, Sensors & Blockchain are Converging Into Web 3.0

By Peter H. Diamandis on Sep 15, 2019

How each of us sees the world is about to change dramatically…

Topics: AR/VR Exponentials Augmented Reality intelligent hardware hardware augmented manufacturing Mojo Vision Microsoft HoloLens Magic Leap
9 min read

Augmented Reality Part 2 - Apps & Hardware

By Peter H. Diamandis on Sep 8, 2019

Today, adults in the U.S. spend over nine hours a day looking at screens. That accounts for more than a third of our day.

Yet even though they serve as a portal to 90 percent of our media consumption, screens continue to define and constrain how and where we consume content, and they may very soon become obsolete.

Riding new advancements in hardware and connectivity, augmented reality (AR) is set to replace these 2D interfaces, instead allowing us to see through a digital information layer. And ultimately, AR headsets will immerse us in dynamic stories, learn-everywhere education, and even gamified work tasks.

If you want to play AR Star Wars, you’re battling the Empire on your way to work, in your cubicle, cafeteria, bathroom and beyond.

We got our first taste of AR’s real-world gamification in 2016, when Niantic released Pokémon Go. Thus began the greatest cartoon character turkey shoot in history. With 5 million daily users, 65 million monthly users, and over $2 billion in revenue, the virtual-overlaid experience remains one for the books.

In the years since, similar AR apps have exploded. Once thick and bulky, AR glasses are becoming increasingly lightweight, stylish, and unobtrusive. And over the next 15 years, AR portals will become almost unnoticeable, as hardware rapidly dematerializes.

Companies like Mojo Vision are even rumored to be developing AR contact lenses, slated to offer us heads-up display capabilities — no glasses required.

In this second installation of our five-part AR blog series, we are doing a deep dive into the various apps, headsets, and lenses on the market today, along with projected growth.

Let’s take a look…

Mobile AR

We have already begun to sample AR’s extraordinary functions through mobile (smartphone) apps. And the growth of the market is only accelerating.

Snap recently announced it will raise $1 billion in short-term debt to invest in media content, acquisitions, and AR features. Both Apple and Google are racing to deploy phones with requisite infrastructure to support hyper-realistic AR.

And in the iOS space, developers use ARKit in iPhone software, from the SE to the latest-generation X, to bring high-definition AR experiences to life. Apple CEO Tim Cook has repeatedly emphasized his belief that AR will “change the way we use technology forever.”

While recent rumors suggest the company’s AR glasses project has been discontinued, Apple’s foray into AR is far from over. Just recently, the tech giant broadcast a large collection of job postings for AR and VR experts. And although somewhat speculative, Apple is likely waiting for the consumer market to mature before releasing its first-generation AR glasses or pivoting towards an entirely new AR hardware product.

For now, Apple seems to be promoting the extensive hardware advancements showcased by its A12 bionic chip, not to mention the variety of apps available in its App Store.

  • In the productivity realm: IKEA Place allows users to try out furniture in the home, experimenting with styles and sizing before ordering online. Or take Vuforia Chalk, a novel AR tool that helps customers fix appliances with real-time virtual assistance. As users direct their smartphone cameras towards troublesome appliances, remote tech support workers can draw on consumers’ screens to guide them through repair steps.
  • As to the AR playground, Monster Park brings Jurassic Park dinosaurs into any landscape you desire, immersing you in a modern-day Mesozoic Era. Meanwhile, Dance Reality can guide you through detailed steps and timing of countless dance styles.
  • In virtually immersive learning, BBC’s Civilisations lets you hold, spin, and view x-rays of ancient artifacts while listening to historical narrations. WWF’s Free Rivers transforms your tabletop into natural landscapes, from the Himalayas to the African Sahara, allowing you to digitally manipulate entire ecosystems to better understand how water flow affects habitats.
  • Or even create your own DIY AR worlds and objects using Thyng.

Yet for Android users, options are just as varied, thanks to ARCore, Google’s AR development kit for Android. While the recently announced Google Glass Enterprise Edition 2 aims to capture enterprise clients, Android smartphone hardware provides remarkable AR experiences for everyday consumers.

  • For sheer doodling, DoodleLens brings your doodles to life, transforming paper drawings into 3D animated figures that you can place and manipulate in your physical environment. And even more directly, Just a Line allows anyone to create a 3D drawing within their physical surroundings, making space itself an endless canvas.
  • Learn as you travel: Google Translate can now take an image of any foreign street sign, menu, or label and provide instantaneous translation in real time. And beyond Earth-bound adventures, the now open-sourced Sky Map guides you through constellations across the night sky.
  • Even alter your own body with Inkhunter, which allows users to preview any potential tattoo design on their skin. Or, as is familiar to most younger folks, change your look with Snapchat’s computer vision-derived filters, which have already reached 90 percent of 12- to 24-year-olds in the U.S.

Leading Headsets

Although the number of AR headsets breaking into the market may seem overwhelming, a few of the top contenders are now pushing the envelope in everything from wide FOV immersion to applications in enterprise.

(1) Highest Resolution

DreamGlass: Connected to a PC or Android-based smartphone, DreamWorld’s headset offers 2.5K resolution in each lens, beating out Full HD resolution screens, but in AR. Bolstered by a flood of investment, such resolution improvements minimize pixel size, reducing the “screen door effect,” whereby visible pixel boundaries disrupt the image like a screen’s mesh. Offering unprecedented levels of hand- and head-tracking precision, the headset even features 6 degrees of freedom (tracking both position and orientation along three axes each).

And with a flexible software development kit (SDK), supported by Unity and Android, the device is highly accessible to developers, making it a ready candidate for countless immersive experiences. Already priced at $619, the DreamGlass and comparable devices will only continue to fall in price.

(2) Best for Enterprise

Google Glass Enterprise Edition 2: In just four years (since Google’s release of the last iteration), the Google Glass has gotten a major upgrade, now geared with an 8-megapixel camera, detachable lens, vastly increased battery life, faster connection, and ultra-high-performance Snapdragon XR1 CPU. Already, the Glass has been sold to over 100 businesses, including GE, agricultural machinery manufacturer AGCO, and health record company Dignity Health.

But perhaps most remarkable are the bucks AR can make for business. Using the Glass, GE has increased productivity by 25 percent, and DHL improved its supply chain efficiency by 15 percent. While only (currently) available for businesses, the new-and-improved AR glasses stand at $999 and will continue to ride plummeting production costs.

(3) Democratized AR

Vuzix Blade: Resembling chunky Oakley sunglasses, these smart glasses are extraordinarily portable, with a built-in Android OS as well as both WiFi and Bluetooth connection. Designed for everyday consumer use (at a price point of $700), the Vuzix Blade is slowly chipping away at smartphone functionalities. For easy control of an intuitive interface, a touchpad on the device’s temple allows consumers to display everything from social media platforms and user messages to “light AR” experiences. Meanwhile, an 8MP HD camera renders your phone camera redundant, allowing users to remain immersed in an experience while digitally capturing it at the same time. All the while, built-in Alexa capabilities and vibration alerts extend users’ experience beyond pure visual stimulation.

(4) Widest Field of View (FOV)

Microsoft HoloLens 2: This newly announced headset leads the industry with a 43° x 29° FOV, more than double its (2016-released) predecessor’s capability. But this drastic increase in visual immersiveness is far from the only device improvement. For improved long-use comfort, the headset’s center of gravity now rests on the top of the head, moving away from typical front-loaded headsets.

An even more novel feature: tiny cameras on the nose bridge verify a user’s identity by scanning the wearer’s eyes and customize the display based on the distance between pupils. Once accompanied by emotion-deducing AIs (now under development), this tracking technology could evolve to intuitively predict a user’s desires and emotional feedback in future models. Geared with a Qualcomm Snapdragon 850 mobile processor and Microsoft’s own built-in AI engine, the HoloLens 2’s potential is limitless.


(5) Class A Comfort

Magic Leap One: Weighing less than 0.8 pounds, this headset provides one of the most lightweight experiences available today with a 40° x 30° FOV, just barely eclipsed by that of Microsoft’s HoloLens 2. En route to dematerialization, Magic Leap merely requires a small “Lightpack” attachment in the wearer’s pocket, connected via cable to the goggles. A handheld controller additionally contains a touchpad, haptic feedback, and six degrees of freedom motion sensing. Meanwhile, light sensors make the digital renderings even more realistic, as they reflect physical light into the viewer’s space.

Teasing AR’s future convergence with AI, Magic Leap even features a virtual human called “Mica,” which responds to a user’s emotions (detected through eye-tracking) by returning a smile or offering a friendly gesture.

Final Thoughts

As headsets plummet in price and size, AR will rapidly permeate households over the next decade.

Once we have mastered headsets and smart glasses, AR-enabled contact lenses will make our virtually enhanced world second nature.

And ultimately, BCIs will directly interface with our neural signals to provide an instantaneous, seamlessly intuitive connection, merging our minds with limitless troves of knowledge, rich human connection, and never-before-possible experiences.

While we are only approaching the knee of the curve, the pioneering mobile apps and headset technologies explored above will soon give rise to one of the most revolutionary industries yet seen, one that will fundamentally transform our lives.

Join Me

(1) A360 Executive Mastermind: Want even more context about how converging exponential technologies will transform your business and industry? Consider joining Abundance 360, my highly selective community of 360 exponentially minded CEOs, who are on a 25-year journey with me — or as I call it, a “countdown to the Singularity.” If you’d like to learn more and consider joining our 2020 membership, apply here.

Share this with your friends, especially if they are interested in any of the areas outlined above.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

Topics: AR/VR Exponentials Augmented Reality intelligent hardware hardware augmented manufacturing Mojo Vision Microsoft HoloLens Magic Leap
8 min read

The Augmented World of 2030 - Part 1

By Peter H. Diamandis on Sep 1, 2019

Augmented Reality is about to add a digital intelligence layer to our every surrounding, transforming retail, manufacturing, education, tourism, real estate, and almost every major industry that holds up our economy today.

Topics: Education Energy Abundance AR/VR Exponentials healthcare Augmented Reality intelligent hardware hardware augmented manufacturing Mojo Vision Microsoft HoloLens Magic Leap
12 min read

Abundance Insider: August 9th, 2019

By Peter H. Diamandis on Aug 9, 2019

In this week's Abundance Insider: Samsung's 'smart' contact lenses, gamified tree-planting, and this week's virtual conference experiment.

P.S. Send any tips to our team by clicking here, and send your friends and family to this link to subscribe to Abundance Insider.

P.P.S. Want to learn more about exponential technologies and home in on your MTP/ Moonshot? Abundance Digital, a Singularity University Program, includes 100+ hours of coursework and video archives for entrepreneurs like you. Keep up to date on exponential news and get feedback on your boldest ideas from an experienced, supportive community. Click here to learn more and sign up.

One chip to rule them all: It natively runs all types of AI software

What it is: A team of researchers primarily based in Beijing has developed a hybrid chip that can natively run all types of AI software. Dubbed Tianjic, the chip has been engineered to combine two distinct architectural approaches to AI — conventional artificial neural networks and brain-inspired spiking neural networks — which each require fundamentally different coding schemes. In effect, Tianjic’s processing units can shift between spiking and binary communications, allowing it to perform a broad range of calculations. To demonstrate Tianjic’s versatility, the team even built an autonomous Tianjic-operated bicycle, which could successfully detect and avoid obstacles, maintain balance, perform voice command recognition, make navigation decisions under varying road conditions, and run conventional software to boot.

Why it’s important: While sometimes conflated under the umbrella term AI, conventional deep learning and brain-inspired neuromorphic computing have developed along separate branches and enable distinct types of calculations. For this reason, today’s field is considered one of Artificial Narrow Intelligence, as most contemporary AIs are “super-intelligent” within the constraints of highly specialized problems, like pattern recognition or strategy games. However, by combining distinct AI architectures in a single chip, Tianjic and its future successors might be the vanguards of Artificial General Intelligence (AGI), birthing multi-skilled machines geared to tackle any computation problem, motor skill, or pattern analysis. | Share on Facebook.
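The two coding schemes Tianjic bridges can be sketched in a few lines. Below is a toy illustration — not Tianjic’s actual design, and all parameters are invented — showing why the schemes differ: a conventional ANN unit outputs a continuous activation, while a spiking (leaky integrate-and-fire) unit emits binary events whose firing rate carries the information.

```python
# Toy contrast between the two coding schemes a hybrid chip must reconcile.
# Not Tianjic's design; thresholds and leak values are illustrative only.

def ann_neuron(inputs, weights):
    """Conventional ANN unit: weighted sum through a ReLU, continuous output."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, s)

def lif_neuron(input_current, steps=100, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire unit: fires binary spikes; the rate encodes value."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + input_current   # integrate input with leaky decay
        if v >= threshold:             # membrane crosses threshold: spike
            spikes += 1
            v = 0.0                    # reset after firing
    return spikes / steps              # spike rate plays the role of an activation

activation = ann_neuron([0.5, 0.2], [1.0, 1.0])  # continuous value (0.7)
rate = lif_neuron(0.3)                           # spike rate in [0, 1]
```

A chip like Tianjic must let the same processing fabric run both styles, which is why its units reconfigure between continuous (binary-encoded) arithmetic and event-driven spiking.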
 

How Alipay Users Planted 100M Trees In China

What it is: Alibaba’s Alipay (one of China’s two dominant mobile payment platforms) has enabled users to plant 100 million trees to date via its “Ant Forest” mini-program. Since the program’s launch in 2016, over 500 million Alipay users have joined, earning “green energy” points in exchange for eco-friendly decisions, such as walking to work, using Dingtalk to hold video conferences (instead of commuting to meetings), or recycling old possessions on Alibaba’s secondhand marketplace Idle Fish. Trackable through leaderboards, these green energy points can then be used to plant trees in China’s most arid regions. So far, Alipay’s partner NGOs have revegetated 933 square kilometers of land — the rough equivalent of 130,000 soccer fields. Alipay even allows users to track satellite images of their trees in real-time and collaborate with friends.

Why it’s important: Announced in 1978, China’s “Green Great Wall” project spans a program area of over 400 million hectares (42 percent of China’s landmass), across which it aims to establish vast belts of new forest by 2050. Alipay’s ‘crowd’-planted trees not only comprise a growing carbon sink, offsetting China’s high emissions, but also aid in building this 4,500-kilometer ecological barrier to combat land degradation. Over the past 20 years, China and India have contributed one-third of the planet’s increased foliage, and crowd-leveraging programs like Ant Forest are fast reducing the barrier to participation. By gamifying “green” behavior and offering real-world prizes, mobile platforms hold an extraordinary power to incentivize sustainable decision-making, reshape communal mindsets, and catalyze climate solutions. | Share on Facebook.

This Week’s Virtual Conference In VirBELA

What it is: Early this week, Peter Diamandis and the Abundance Digital team partnered with virtual coworking company VirBELA to run an immersive, virtual conference experiment. Uniting over 100 participants from around the world, the summit featured embodied avatar speakers (including a keynote by Peter and exclusive XPRIZE updates), an interactive auditorium, and social recreational activities — from boat tours to seaside group conversations. Iterating on its software for next-generation remote collaboration, VirBELA strives to dematerialize and democratize the traditional office, allowing anyone to engage in team projects regardless of geography. Currently, VirBELA’s software is home to eXp Realty, a $620 million+ real estate company with over 20,000 agents and zero staffed, physical offices.

Why it’s important: The future of work — and social interaction, for that matter — will soon make physical distance immaterial. As virtual reality hardware and low latency rendering improve dramatically in the coming years, digital and delocalized work environments will begin to slash travel costs, company carbon footprints, and wasted time. As this week’s experiment validated, people can increasingly enjoy all the benefits of conventional conferences from the convenience of a living room, at zero cost. Perhaps even more exciting, platforms like VirBELA are vastly enhancing the accessibility of today’s brightest minds, industry leaders, and cutting-edge content.

Researchers Say This AI Can Spot Unsafe Food On Amazon Faster Than The FDA

What it is: Researchers at Boston University School of Public Health have successfully trained an AI to spot unsafe food items potentially in need of recall. Aggregating nearly 1.3 million Amazon food product reviews, the team’s neural network found matches between a subset of these products and prior U.S. FDA-recalled items. The researchers’ deep learning AI, a Bidirectional Encoder Representations from Transformers model (dubbed BERT), was then taught to identify language in online reviews that could confirm a food’s safety status and aid in risk stratification. Using crowd-sourced reviews, the BERT AI consequently distinguished which food products had been officially FDA-recalled with 74 percent accuracy, and even managed to predict a similar fate for 20,000 additional products, now candidates for recall.

Why it’s important: Predicting and mitigating risk before losses are incurred is one of the most profitable business opportunities of the next decade. Leveraging e-commerce data, BERT’s ability to scour massive databases and classify products by risk serves as a prime example, unlocking countless implications. Regulatory processes (think: FDA recalls) can become much more efficient as products are instantaneously flagged, bypassing a recall’s costly research phase. Within supply chain monitoring, AIs might continuously analyze real-time employee and user feedback to identify bottlenecks and inefficiencies. For end consumers, future iterations of BERT could even crowdsource quality-control decisions and concise supplier feedback. Knowing your customer and listening to the data has never been easier. | Share on Facebook.
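The core idea — scoring review language for risk and flagging recall candidates — can be sketched without a full transformer. The study used a fine-tuned BERT model; the minimal stand-in below uses an invented keyword list and invented example reviews purely to illustrate the flagging pipeline.

```python
# Illustrative sketch of review-based risk stratification. The researchers used
# a fine-tuned BERT model; this keyword score is a deliberately simple stand-in.
# RISK_TERMS and the sample reviews are invented for demonstration.

RISK_TERMS = {"sick", "moldy", "expired", "vomit", "recall", "smell"}

def risk_score(review: str) -> float:
    """Fraction of a review's words that match known risk terms."""
    words = review.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in RISK_TERMS)
    return hits / len(words)

def flag_for_review(reviews, threshold=0.1):
    """Return reviews whose risk score exceeds the threshold: recall candidates."""
    return [r for r in reviews if risk_score(r) > threshold]

reviews = [
    "Delicious and fresh, arrived quickly",
    "Package was moldy and made my kids sick",
]
flagged = flag_for_review(reviews)  # only the second review is flagged
```

A real system replaces the keyword score with a learned classifier (like BERT) so that context — negation, sarcasm, misspellings — is handled, but the downstream flagging and risk-ranking step looks much the same.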

Samsung's Patented ‘Smart’ Contact Lenses

What it is: Samsung has just been granted a U.S. patent to develop smart contact lenses capable of streaming text, capturing videos, and even beaming images directly into a wearer’s eyes. Given their multi-layered lens architecture, the contacts are designed to include a motion sensor (for eye movement tracking), hidden camera, and display unit. Current lens designs would even theoretically allow users to control their devices remotely, possibly administering commands by blinking or navigating a user interface with eye movements alone.

Why it’s important: While still immersed in the R&D phase, smart contact lenses are projected to comprise a $7.2 billion market by 2023. Perhaps one of the most promising candidates for a future of ubiquitous augmented reality, smart lenses are also increasingly feasible thanks to advances in sensor technology. Riding Moore’s Law, smart sensors (and what some have dubbed “smart dust”) have shrunk dramatically in size, and could one day record and transmit everything from lens-wearers’ audiovisual experiences to auto-populated contextual information. Keep an eye out (no pun intended) for Google’s response as it works on its own smart lens revamp of the Google Glass. | Share on Facebook.
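The blink-as-command idea in the patent hinges on one practical problem: telling deliberate blinks apart from involuntary ones. A hypothetical sketch — the sample format, threshold, and logic below are invented, not Samsung’s design — is to treat only eye closures held past a duration threshold as commands.

```python
# Hypothetical blink-to-command detector, not based on Samsung's patent details.
# Input: a stream of (time_seconds, eye_is_open) sensor samples.
# Natural blinks last ~0.1-0.3 s; the 0.5 s threshold here is an assumption.

COMMAND_BLINK_SECONDS = 0.5

def detect_commands(samples):
    """Return start times of eye closures long enough to count as commands."""
    commands = []
    closed_since = None
    for t, eye_open in samples:
        if not eye_open and closed_since is None:
            closed_since = t                          # closure begins
        elif eye_open and closed_since is not None:
            if t - closed_since >= COMMAND_BLINK_SECONDS:
                commands.append(closed_since)         # deliberate: register command
            closed_since = None                       # short closure: natural blink
    return commands

samples = [(0.0, True), (1.0, False), (1.1, True),    # natural 0.1 s blink: ignored
           (2.0, False), (2.7, True)]                 # deliberate 0.7 s blink
```

A production lens would add debouncing, gaze-direction context, and per-user calibration, but the duration-threshold idea is the standard starting point for distinguishing intentional from reflexive blinks.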

Tokyo Offers $1 Billion Research Grant For Human Augmentation, Cyborg Tech

What it is: The Japanese government has just set aside roughly 100 billion yen (or $921 million) to fund projects spanning cyborg technologies, industrial waste solutions, and augmentation for aging individuals. Planning to fund teams for the first 5 years of a 10-year support agreement, Tokyo will soon invite researchers and academics (both domestic and international) to submit proposals in 25 key problem areas. One source reports that portions of the research grant will be channeled towards "cyborg technology that can replace human bodily functions using robotics or living organisms by 2050." In light of a declining birth rate and a shrinking workforce to follow, Japan might rely heavily on such solutions to bolster economic productivity.

Why it’s important: Similar to the U.S. government's $2.5 billion+ SBIR/STTR grant and seed funding program, Japan’s government is issuing a powerful clarion call to private industry and academics: an invitation not only to tackle some of the nation’s most pressing challenges, but also to invest in long-term, experimental technologies set for commercialization between 2025 and 2060. As OECD nations begin to witness a dwindling birth rate, resulting labor shortages will require converging advancements in AI, robotics, and additional human augmentation technologies. Whether in pursuit of longevity extension or cyborg construction, Japan’s initiative might soon birth solutions that allow us to work longer or replace certain human labor altogether. | Share on Facebook.

What is Abundance Insider?

This email is a briefing of the week's most compelling, abundance-enabling tech developments, curated by my team of entrepreneurs and technology scouts, including contributions from standout technology experts and innovators.

Want more conversations like this?

At Abundance 360, a Singularity University program, we teach the metatrends, implications and unfair advantages for entrepreneurs enabled by breakthroughs like those featured above. We're looking for CEOs and entrepreneurs who want to change the world. The program is highly selective. If you'd like to be considered, apply here.

Abundance Digital, a Singularity University program, is an online educational portal and community of abundance-minded entrepreneurs. You’ll find weekly video updates from Peter, a curated newsfeed of exponential news, and a place to share your bold ideas. Click here to learn more and sign up.

Know someone who would benefit from getting Abundance Insider? Send them to this link to sign up.

Topics: Abundance Insider Future of Work AR/VR AI food Artificial Intelligence virtual reality environment capital Augmented Reality China computation future of food
11 min read

Disrupting Real Estate & Construction

By Peter H. Diamandis on May 26, 2019

In the wake of the housing market collapse of 2008, one entrepreneur decided to dive right into the failing real estate industry. But this time, he didn’t buy any real estate to begin with. Instead, Glenn Sanford decided to launch the first-ever cloud-based real estate brokerage, eXp Realty.

Topics: Energy Abundance Materials Science AR/VR Transportation Abundance 360 Real Estate a360 virtual reality Autonomous Drones materials autonomous vehicles construction flying cars electric vehicles immersive worlds solar cells solar power cars ridesharing future of real estate future of construction new structures seasteading Boring Company floating cities future of cities
8 min read

Convergence in VR/AR: 5 Anticipated Breakthroughs to Watch

By Peter H. Diamandis on May 5, 2019

Convergence is accelerating disruption… everywhere!

Topics: Abundance AR/VR Abundance 360 a360 virtual reality Augmented Reality film entertainment augmented manufacturing convergence catalyzer convergence immersive worlds movies edutainment
8 min read

Future of Entert[AI]nment - Part 1

By Peter H. Diamandis on Apr 28, 2019

Twenty years ago, entertainment was dominated by a handful of producers and monolithic broadcasters, a near-impossible market to break into.

Topics: Abundance AR/VR AI Abundance 360 a360 machine learning Artificial Intelligence film entertainment convergence catalyzer convergence immersive worlds music movies songwriting
8 min read

Education at All Ages

By Peter H. Diamandis on Feb 24, 2019

Today, over 77 percent of Americans own a smartphone with access to the world’s information and near-limitless learning resources.

Topics: Education Abundance AR/VR Data AI Artificial Intelligence virtual reality XPRIZE connectivity Web 3.0 Spatial Web adult literacy retooling professional training future of education online education edtech ESL automation Learning Upgrade literacy job market mobile devices mobile learning People ForWords
10 min read

Future of Work, Free Time & Play

By Peter H. Diamandis on Dec 23, 2018

How we work and play is about to transform.

Topics: Future of Work AR/VR virtual reality future Augmented Reality Web 3.0 Spatial Web free time gaming storytelling future of play film