Top Takeaways From The Economist Innovation Summit

Over the past few years, the word ‘innovation’ has degenerated into something of a buzzword. In fact, according to Vijay Vaitheeswaran, US business editor at The Economist, it’s one of the most abused words in the English language.

The word is over-used precisely because we’re living in a great age of invention. But the pace at which those inventions are changing our lives is fast, unfamiliar, and more than a little scary.

So what strategies do companies need to adopt to make sure technology leads to growth that’s not only profitable, but positive? How can business and government best collaborate? Can policymakers regulate the market without suppressing innovation? Which technologies will impact us most, and how soon?

At The Economist Innovation Summit in Chicago last week, entrepreneurs, thought leaders, policymakers, and academics shared their insights on the current state of exponential technologies, and the steps companies and individuals should be taking to ensure a tech-positive future. Here’s their expert take on the tech and trends shaping the future.


Blockchain

There’s been a lot of hype around blockchain; apparently it can be used for everything from distributing aid to refugees to voting. However, it’s too often conflated with cryptocurrencies like Bitcoin, and concrete use cases remain scarce. Where does the technology currently stand?

Julie Sweet, chief executive of Accenture North America, emphasized that the technology is still in its infancy. “Everything we see today are pilots,” she said. The most promising of these pilots are taking place across three different areas: supply chain, identity, and financial services.

When you buy something from outside the US, Sweet explained, it passes through about 80 different parties, and 70 percent of the relevant data is replicated and prone to error, with paper-based documents often to blame. Blockchain provides a secure way to eliminate paper in supply chains, upping accuracy and cutting costs in the process.

One of the most prominent use cases in the US is Walmart—the company has mandated that all suppliers in its leafy greens segment be on a blockchain, and its food safety has improved as a result.

Beth Devin, head of Citi Ventures’ innovation network, added, “Blockchain is an infrastructure technology. It can be leveraged in a lot of ways. There’s so much opportunity to create new types of assets and securities that aren’t accessible to people today. But there’s a lot to figure out around governance.”

Open Source Technology

Are the days of proprietary technology numbered? More and more companies and individuals are making their source code publicly available, and the benefits of doing so are more widespread than ever before. But what are the limitations and challenges of open source tech, and where might it go in the near future?

Bob Lord, senior VP of cognitive applications at IBM, is a believer. “Open-sourcing technology helps innovation occur, and it’s a fundamental basis for creating great technology solutions for the world,” he said. However, the biggest challenge for open source right now is that companies are taking out more than they’re contributing back to the open-source world. Lord pointed out that IBM has a rule about how many lines of code employees take out relative to how many lines they put in.

Another challenge area is open governance; blockchain by its very nature should be transparent and decentralized, with multiple parties making decisions and being held accountable. “We have to embrace open governance at the same time that we’re contributing,” Lord said. He advocated for a hybrid-cloud environment where people can access public and private data and bring it together.

Augmented and Virtual Reality

Augmented and virtual reality aren’t just for fun and games anymore, and they’ll be even less so in the near future. According to Pearly Chen, vice president at HTC, they’ll also go from being two different things to being one and the same. “AR overlays digital information on top of the real world, and VR transports you to a different world,” she said. “In the near future we will not need to delineate between these two activities; AR and VR will come together naturally, and will change everything we do as we know it today.”

For that to happen, we’ll need a more ergonomically friendly device than we have today for interacting with this technology. “Whenever we use tech today, we’re multitasking,” said product designer and futurist Jody Medich. “When you’re using GPS, you’re trying to navigate in the real world and also manage this screen. Constant task-switching is killing our brain’s ability to think.” Augmented and virtual reality, she believes, will allow us to adapt technology to match our brain’s functionality.

This all sounds like a lot of fun for uses like gaming and entertainment, but what about practical applications?  “Ultimately what we care about is how this technology will improve lives,” Chen said.

A few ways that could happen? Extended reality will be used to simulate hazardous real-life scenarios, reduce the time and resources needed to bring a product to market, train healthcare professionals (such as surgeons), or provide therapies for patients—not to mention education. “Think about the possibilities for children to learn about history, science, or math in ways they can’t today,” Chen said.

Quantum Computing

If there’s one technology that’s truly baffling, it’s quantum computing. Qubits, entanglement, quantum states—it’s hard to wrap our heads around these concepts, but they hold great promise. Where is the tech right now?

Mandy Birch, head of engineering strategy at Rigetti Computing, thinks quantum development is starting slowly but will accelerate quickly. “We’re at the innovation stage right now, trying to match this capability to useful applications,” she said. “Can we solve problems cheaper, better, and faster than classical computers can do?” She believes quantum’s first breakthrough will happen in two to five years, and that its highest potential is in applications like routing, supply chain, and risk optimization, followed by quantum chemistry (for materials science and medicine) and machine learning.

David Awschalom, director of the Chicago Quantum Exchange and senior scientist at Argonne National Laboratory, believes quantum communication and quantum sensing will become a reality in three to seven years. “We’ll use states of matter to encrypt information in ways that are completely secure,” he said. A quantum voting system, currently being prototyped, is one application.

Who should be driving quantum tech development? The panelists emphasized that no one entity will get very far alone. “Advancing quantum tech will require collaboration not only between business, academia, and government, but between nations,” said Linda Sapochak, division director of materials research at the National Science Foundation. She added that this doesn’t just go for the technology itself—setting up the infrastructure for quantum will be a big challenge as well.


Space

Space has always been the final frontier, and it still is—but it’s not quite as far removed from our daily lives now as it was when Neil Armstrong walked on the moon in 1969.

The space industry has historically been funded by governments and private defense contractors. But in 2009, SpaceX launched its first commercial satellite, and in the years since, the company has drastically cut the cost of spaceflight. More importantly, it published its pricing, which brought transparency to a market that hadn’t seen it before.

Entrepreneurs around the world started putting together business plans, and there are now over 400 privately-funded space companies, many with consumer applications.

Chad Anderson, CEO of Space Angels and managing partner of Space Capital, pointed out that the technology floating around in space was, until recently, archaic. “A few NASA engineers saw they had more computing power in their phone than there was in satellites,” he said. “So they thought, ‘why don’t we just fly an iPhone?’” They did—and it worked.

Now companies have networks of satellites monitoring the whole planet, producing a huge amount of data that’s valuable for countless applications like agriculture, shipping, and observation. “A lot of people underestimate space,” Anderson said. “It’s already enabling our modern global marketplace.”

Next up in the space realm, he predicts, are mining and tourism.

Artificial Intelligence and the Future of Work

From the US to Europe to Asia, alarms are sounding about AI taking our jobs. What will be left for humans to do once machines can do everything—and do it better?

These fears may be unfounded, though, and are certainly exaggerated. It’s undeniable that AI and automation are changing the employment landscape (not to mention the way companies do business and the way we live our lives), but if we build these tools the right way, they’ll bring more good than harm, and more productivity than obsolescence.

Accenture’s Julie Sweet emphasized that AI alone is not what’s disrupting business and employment. Rather, it’s what she called the “triple A”: automation, analytics, and artificial intelligence. But even this fear-inducing trifecta of terms doesn’t spell doom, for workers or for companies. Accenture has automated 40,000 jobs—and hasn’t fired anyone in the process. Instead, they’ve trained and up-skilled people. The most important drivers to scale this, Sweet said, are a commitment by companies and government support (such as tax credits).

Imbuing AI with the best of human values will also be critical to its impact on our future. Tracy Frey, Google Cloud AI’s director of strategy, cited the company’s set of seven AI principles. “What’s important is the governance process that’s put in place to support those principles,” she said. “You can’t make macro decisions when you have technology that can be applied in many different ways.”

High Risks, High Stakes

This year, Vaitheeswaran said, 50 percent of the world’s population will have internet access (he added that he’s disappointed that percentage isn’t higher given the proliferation of smartphones). As technology becomes more widely available to people around the world and its influence grows even more, what are the biggest risks we should be monitoring and controlling?

Information integrity—being able to tell what’s real from what’s fake—is a crucial one. “We’re increasingly operating in siloed realities,” said Renee DiResta, director of research at New Knowledge and head of policy at Data for Democracy. “Inadvertent algorithmic amplification on social media elevates certain perspectives—what does that do to us as a society?”

Algorithms have also already been proven to perpetuate the biases of the people who create them—and those people are often wealthy, white, and male. Ensuring that technology doesn’t propagate unfair bias will be crucial to its ability to serve a diverse population, and to keep societies from becoming further polarized and inequitable. The polarization of experience that results from pronounced inequalities within countries, Vaitheeswaran pointed out, can end up undermining democracy.

We’ll also need to walk the line between privacy and utility very carefully. As Dan Wagner, founder of Civis Analytics, put it, “We want to ensure privacy as much as possible, but open access to information helps us achieve important social good.” Medicine in the US has been hampered by privacy laws; if, for example, we had more data about biomarkers around cancer, we could provide more accurate predictions and ultimately better healthcare.

But going the Chinese way—a total lack of privacy—is likely not the answer, either. “We have to be very careful about the way we bake rights and freedom into our technology,” said Alex Gladstein, chief strategy officer at Human Rights Foundation.

Technology’s risks are clearly as fraught as its potential is promising. As Gary Shapiro, chief executive of the Consumer Technology Association, put it, “Everything we’ve talked about today is simply a tool, and can be used for good or bad.”

The decisions we’re making now, at every level—from the engineers writing algorithms, to the legislators writing laws, to the teenagers writing clever Instagram captions—will determine where on the spectrum we end up.

Image Credit: Rudy Balasko /


via Singularity Hub

March 14, 2019 at 11:01PM

How celebrities have fuelled the amazing rise in pseudoscience


Cool amusement: will cryotherapy and other treatments help Timothy Caulfield live forever? Probably not (Image credit: Peacock Alley Entertainment)

By Wendy Glauser

FOR the past decade, Timothy Caulfield, a professor of health law in Alberta, Canada, has been waging war on pseudoscience. He has written books on vaccination myths and about our uncritical relationship to medicine, most famously in Is Gwyneth Paltrow Wrong About Everything?

He is big on Twitter, and now on television, too. Each episode of his series A User’s Guide to Cheating Death delves into the ways people are trying to live longer or look younger, either through …


via New Scientist – Health

March 10, 2019 at 05:35PM

OpenAI’s Eerily Realistic New Text Generator Writes Like a Human

Trying to understand how new technologies will shape our lives is an exercise in managing hype. When technologists say their new invention has the potential to change the world, you’d hardly expect them to say anything else. But when they say they’re so concerned about its potential to change the world that they won’t release their invention, you sit up and pay attention.

This was the case when OpenAI, the non-profit founded in 2015 by Y Combinator’s Sam Altman and Elon Musk (amongst others), announced its new neural network for natural language processing: GPT-2. In a blog post, along with some striking examples of its work, OpenAI announced that the full model would not be released to the public, citing concerns about its potential for misuse.

More Data, Better Data

In outline, GPT-2 resembles the strategy that natural language processing neural networks have often employed: trained on a huge 40GB text sample drawn from the internet, the network statistically associates words and patterns of words with each other. It can then attempt to predict the next word in a sequence based on the previous words, generating samples of new text. So far, so familiar: people have marveled at the ability of neural networks to generate text for some years. They’ve been trained to write novels and come up with recipes for our amusement.
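The core idea of "predict the next word from the previous ones" can be illustrated with a toy word-frequency model. This sketch is purely illustrative—a crude bigram counter, nothing like GPT-2's actual architecture or scale:

```python
from collections import defaultdict, Counter


def train_bigrams(text):
    """Count which words follow which in a training corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows


def generate(follows, seed, length=10):
    """Repeatedly pick the statistically most likely next word."""
    out = [seed]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # seed word never appeared mid-sentence
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)


corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(generate(model, "on", length=2))  # prints: on the cat
```

GPT-2 replaces these raw counts with a neural network over a 40GB corpus, which is what lets it capture long-range structure a bigram table never could.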

But GPT-2 appears to be a step ahead of its predecessors. It’s not entirely clear why, in part due to the refusal to release the whole model; but it appears simply to represent a scaling-up of previous OpenAI efforts, using a neural network design that has existed for a couple of years. That means more compute, more fine-tuning, and a larger training dataset.

The data is scraped from the internet, but with a twist: the researchers kept the quality high by scraping from outbound links from Reddit that got more than three upvotes—so if you’re a Reddit user, you helped GPT-2 find and clean its data.
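That curation step boils down to a simple filter, using upvotes as a cheap human proxy for page quality. Here's a hypothetical sketch—the field names and threshold mechanics are assumptions for illustration, not OpenAI's actual pipeline:

```python
def quality_outbound_links(posts, min_karma=3):
    """Keep outbound URLs only from Reddit posts that cleared the karma bar,
    skipping links that point back into Reddit itself."""
    return [
        post["url"]
        for post in posts
        if post["karma"] > min_karma and "reddit.com" not in post["url"]
    ]


posts = [
    {"url": "https://example.com/good-article", "karma": 57},
    {"url": "https://example.com/spam", "karma": 1},       # too few upvotes
    {"url": "https://reddit.com/r/foo", "karma": 90},      # internal link
]
print(quality_outbound_links(posts))  # prints: ['https://example.com/good-article']
```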

The work of previous RNNs (Recurrent Neural Networks) often felt as if the vast samples of classic literature, or death metal band names, or Shakespeare, had been put through a blender then hastily reassembled by someone who’d only glanced at the original.

This is why talking to AI chatbots can be so frustrating; they cannot retain context, because they have no innate understanding of anything they’re talking about beyond these statistical associations between words.

GPT-2 operates on similar principles: it has no real understanding of what it’s talking about, or of any word or concept as anything more than a vector in a huge vector space, vastly distant from some and intimately close to others. But, for certain purposes, this might not matter.
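That vector-space picture is easy to make concrete: "closeness" between word vectors is typically measured with cosine similarity. The three-dimensional embeddings below are invented for illustration—real models use hundreds of dimensions:

```python
import math


def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm


# Made-up embeddings: semantically related words point in similar directions
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.12]
banana = [0.1, 0.05, 0.95]

print(cosine(king, queen) > cosine(king, banana))  # prints: True
```

To the model, "king" and "queen" are nothing but nearby points in this space—no meaning attaches to either, which is exactly the limitation the article describes.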


When prompted to write about unicorns that could speak English, GPT-2 (admittedly, after ten attempts) came up with a page of text like this:

“Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

“While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

“However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution or at least a change in social organization,” said the scientist.”

What’s really notable about this sample is its overarching structure: it reads almost exactly as a normal scientific article or write-up of a press release would. The model doesn’t contradict itself or lose its flow in the middle of a sentence. Its references to location are consistent, as are the particular “topics” of discussion in each paragraph. GPT-2 is not explicitly programmed to remember (or invent) Dr. Pérez’s name, for example—yet it does.

The unicorn sample is a particularly striking example, but the model’s capabilities also allowed it to produce a fairly convincing article about itself. With no real understanding of the underlying concepts or facts of the matter, the piece has the ring of tech journalism, but is entirely untrue (thankfully, otherwise I’d be out of a job already).

The OpenAI researchers note that, as with all neural networks, performance is determined by the computational resources used to train the network and the size of its training sample. OpenAI’s blog post explains: “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time.”

Rewriting the World

However, when trained on specifically-selected datasets for narrower applications, the AI becomes more convincing. An example of the niche applications the OpenAI researchers trained the model to perform was writing Amazon reviews. This kind of convincing generation of online content was what led OpenAI to decide not to release the algorithm for general use.

This decision has been controversial, with some cynics suggesting that it’s a publicity stunt designed to get more articles written to overhype OpenAI’s progress. But there’s no need for an algorithm to be particularly intelligent to shape the world—as long as it’s capable of fooling people.

Deepfake videos, especially in these polarized times, could be disruptive enough, but the complexity of a video can make it easier to spot the “artifacts,” the fingerprints left by the algorithms that generate them.

Not so with text. If GPT-2 can generate endless, coherent, and convincing fake news or propaganda bots online, it will do more than put some Macedonian teens out of a job. Clearly, there is space for remarkable improvements: could AI write articles, novels, or poetry that some people prefer to read?

The long-term impacts of such a system on society are difficult to comprehend. It is well past time for the machine learning field to abandon its ‘move fast and break things’ approach to releasing algorithms with potentially damaging social impacts. An ethical debate about the software we release is just as important as ethical debates about new advances in biotechnology or weapons manufacture.

GPT-2 hasn’t yet eliminated some of the perennial bugbears associated with RNNs. Occasionally, for example, it will repeat words, unnaturally switch topics, or say things that don’t make sense due to poor word modeling: “The fire is happening under water,” for example.

Unreasonable Reason

Yet one of the most exciting aspects of GPT-2 is its apparent ability to develop what you might call “emergent skills” that weren’t specifically programmed. The algorithm was never explicitly trained to translate between languages or summarize longer articles, but it can have a decent stab at both tasks simply because of the sheer size of its training dataset.

In that dataset were plenty of examples of long pieces of text, followed by “TL;DR.” If you prompt GPT-2 with the phrase “TL;DR”, it will attempt to summarize the preceding text. It was not designed for this task, and so it’s a pretty terrible summarizer, falling well short of how the best summarizing algorithms can perform.
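Mechanically, "prompting" here just means appending the cue to the input text before asking the model to continue. A trivial sketch of the prompt construction—the model call itself is omitted, since the full weights weren't public at the time:

```python
def tldr_prompt(article: str) -> str:
    """Append the summarization cue GPT-2 absorbed from its training data.
    Asked to continue this text, the model tends to produce a summary."""
    return article.rstrip() + "\nTL;DR:"


prompt = tldr_prompt("A very long article about quantum computing... ")
print(prompt.endswith("TL;DR:"))  # prints: True
```

The summary is then whatever tokens the model predicts should follow the cue—a behavior learned entirely from co-occurrence patterns, never programmed in.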

Yet the fact that it will even attempt this task with no specific training shows just how much behavior, structure, and logic these neural networks can extract from their training datasets. As a byproduct of its endless quest to determine which word comes next, the model appears to develop a vague notion of what it is supposed to do in this TL;DR situation. This is unexpected, and exciting.

You can download and play with a toy version of GPT-2 from GitHub.

Image Credit: Photo Kozyr /


via Singularity Hub

March 8, 2019 at 12:13AM

How Three People With HIV Became Virus-Free Without Drugs

You’re not entirely human.

Our DNA contains roughly 100,000 pieces of viral DNA, totaling 8 percent of our entire genome. Most are ancient relics from long-forgotten invasions; but to HIV patients, the viral attacks are very real and present in every moment of their lives.

HIV is the virus that causes AIDS—the horrifying disease that cruelly eats away at the immune system. As a “retrovirus,” the virus inserts its own genetic material into a cell’s DNA, and hijacks the cell’s own protein-making machinery to spew out copies of itself. It’s the ultimate parasite.

An HIV diagnosis in the 80s was a death sentence; nowadays, thanks to combination therapy—undoubtedly one of medicine’s brightest triumphs—the virus can be kept at bay. That is, until it mutates, evades the drugs, propagates, and strikes again. That’s why doctors never say an HIV patient is “cured,” even if the viral load is undetectable in the blood.

Except for one. Dubbed the “Berlin Patient,” Timothy Ray Brown, an HIV-positive cancer patient, received a total blood stem cell transplant to treat his aggressive blood cancer back in 2008. He came out of the surgery not just free of cancer—but also free of HIV.

Now, two new cases suggest Brown isn’t a medical unicorn. One study, published Tuesday in Nature, followed an HIV-positive patient with Hodgkin’s lymphoma, a white blood cell cancer, for over two years after a bone marrow transplant. The “London patient” remained virus-free for 18 months after quitting his anti-HIV drugs, making him the second person ever to beat back the virus without drugs.

The other, presented at the Conference on Retroviruses and Opportunistic Infections in Washington, also received a stem cell transplant to treat his leukemia while controlling his HIV load using drugs. He stopped anti-virals in November 2018—and doctors have found only traces of the virus’s genetic material, even when using a myriad of ultra-sensitive techniques.

Does this mean a cure for HIV is in sight? Here’s what you need to know.

Is There a Cure on the Horizon?

Sadly, no. Stem cell transplantation, often in the form of a bone marrow transplant, swaps one evil for another. The dangerous procedure requires extensive immunosuppression afterwards and is far too intensive as an everyday treatment, especially because most HIV cases can be managed with antiviral therapy.

Why Did Stem Cell Transplants Treat HIV, Anyways?

The common denominator among the three is that they all received blood stem cell transplants for blood cancer. Warding off HIV was almost a lucky side-effect.

I say “almost” because the type of stem cells the patients received was different from their own. If you picture an HIV virus as an Amazon delivery box, the box needs to dock to the recipient (the cell’s outer surface) before the virus injects its DNA cargo. The docking process involves a bunch of molecules, but CCR5 is a critical one. For roughly 50 percent of all HIV strains, CCR5 is absolutely necessary for the virus to get into a type of immune cell called the T cell and kick off its reproduction.

No CCR5, no HIV swarm, no AIDS.

If CCR5 sounds familiar, that may be because it was the target in the CRISPR baby scandal, in which a rogue Chinese scientist edited the receptor in an ill-fated attempt to make a pair of twins immune to HIV (he botched it).

As it happens, roughly 10 percent of northern Europeans carry a mutation in CCR5 that makes them naturally resistant to HIV. The mutant, CCR5 Δ32, lacks a key component, which prevents HIV from docking.

Here’s the key: to treat their cancer, all three seemingly “cured” patients received stem cells from matching donors who naturally carried CCR5 Δ32. Once settled into their new hosts, the blood stem cells activated and essentially repopulated the entire blood system—immune cells included—with the HIV-resistant super-cells. Hence, bye bye virus.

But Are Mutant Stem Cells Really the Cure?

Here’s where the story gets complicated.

In theory—and it is a good one—lack of full-on CCR5 is why the patients were able to beat back HIV even after withdrawing their anti-viral meds.

But other factors could be at play. Back in the late 2000s, Brown underwent extensive full-body radiation to eradicate his cancerous cells, and received two bone marrow transplants. To ward off his body rejecting the cells, he took extremely harsh immunosuppressants that are no longer on the market because of their toxicity. The turmoil nearly killed him.

Because Brown’s immune system was almost completely destroyed and rebuilt, it led scientists to wonder if near-death was necessary to reboot the body and make it free of HIV.

Happily, the two new cases suggest it’s not. Although the two patients did receive chemotherapy for their cancer, the drugs specifically targeted their blood cells to clear them out and “make way” for the new transplant population.

Yet in the years between Brown and the London patient, others tried to replicate the process. Every attempt failed, in that the virus came back after anti-viral drugs were withdrawn.

Scientists aren’t completely sure why they failed. One theory is that the source of blood stem cells matters, in the sense that grafted cells need to induce an immune response called graft-versus-host.

As the name implies, here the new cells viciously attack the host—something that doctors usually try to avoid. But in this case, the immune attack may be responsible for wiping out the last HIV-infected T cells, the “HIV reservoir,” allowing the host’s immune system to repopulate with a clean slate.

Complicating things even more, a small trial transplanting cells with normal CCR5 into HIV-positive blood cancer patients also found that the body was able to fight back the HIV onslaught—for up to 88 months in one patient. Because immunosuppressants both limit the graft-versus-host/HIV attack and prevent HIV from infecting new cells, the authors suggest that the timing and dosage of these drugs could be essential to success.

One more ingredient further complicates the biological soup: only about half of HIV strains use CCR5 to enter cells. Other types, such as X4, rely on other proteins for entry. With CCR5 gone, these alternate strains could take over the body, perhaps more viciously without competition from their brethren.

So the New Patients Don’t Matter?

On the contrary, they do. The London patient is the first since Brown to live without a detectable HIV load for over a year. This suggests that Brown isn’t a fluke—CCR5 is absolutely a good treatment target for further investigation.

That’s not to say the two patients are cured. Because HIV is currently only manageable, scientists don’t yet have a good definition of “cured.” Brown, now 12 years free of HIV, is by consensus the only one that fits the bill. The two new cases, though promising, are still considered in long-term remission.

As of now there are no accepted standards on how long a patient needs to be HIV-free before they’re considered cured. What’s more, there are multiple ways to detect HIV load in the body—the Düsseldorf patient, for example, showed low signals of the virus using ultrasensitive tests. Whether the detected bits are enough to launch another HIV assault is anyone’s guess.

But the two new proof-of-concepts jolt the HIV-research sphere into a new era of hope with a promise: the disease, affecting 37 million people worldwide, can be cured.

What Next?

More cases may be soon to come.

The two cases were part of the IciStem program, a European collaboration that guides investigations into using stem cell transplantation as a cure for HIV. As of now, the program has over 22,000 donors with the beneficial CCR5 Δ32 mutation, and 39 HIV-positive patients have received transplants. More cases will build stronger evidence that the approach works.

Stem cell transplants are obviously not practical as an everyday treatment option, but biotech companies are already actively pursuing CCR5-based leads in a two-pronged approach: one, attack the HIV reservoir of infected cells; two, supply the body with brand-new replacements.

Translation? Use any method available to get rid of CCR5 in immune cells.

Sangamo, based in California, is perhaps the most prominent player. In one trial, the company edited CCR5 out of extracted blood cells before infusing them back into the body—a sort of CAR-T for HIV. The number of edited cells wasn’t enough to beat back HIV, but the approach did clear out a large pool of the virus before it bounced back. With the advent of CRISPR making the necessary edits more efficient, more trials are already in the works.

Other efforts, expertly summarized by the New York Times, include making stem cells resistant to HIV—acting as a lifelong well of immune cells resistant to the virus—or using antibodies against CCR5.

Whatever the treatment, any therapy that targets CCR5 also has to consider this: deletion of the gene in the brain has cognitive effects, in that it enhances cognition (in mice) and improves brain recovery after stroke. For side effects, these are pretty awesome. But they also highlight just how little we still know about how the gene works outside the immune system.

Final Takeaway?

Despite all the complexities, these two promising cases add hope to an oft-beaten research community. Dr. Annemarie Wensing at the University Medical Center Utrecht summarized it well: “This will inspire people that a cure is not a dream. It’s reachable.”

Image Credit: Kateryna Kon /


via Singularity Hub

March 10, 2019 at 11:01PM

Air pollution: Cars should be banned near schools says public health chief

Exhaust pipe (Image copyright: Getty Images)

Public health chiefs have called for cars to be banned around schools in the UK, reports say.

Paul Cosford, the medical director of Public Health England, told the Times it should be socially unacceptable to leave a car running near school gates.

The comments came as PHE published a series of recommendations on how the government can improve air quality.

PHE said 28,000 to 36,000 deaths a year in the UK could be attributed to long-term exposure to air pollution.

It is also calling for congestion charges to be imposed in cities across the UK.

It describes air pollution as the biggest environmental threat to health in the UK and says there is strong evidence that air pollution causes the development of coronary heart disease, stroke, respiratory disease and lung cancer, and exacerbates asthma.

In its review, it recommends:

  • Redesigning cities so people aren’t so close to highly polluting roads by, for example, designing wider streets or using hedges to screen against pollutants
  • Investing more in clean public transport as well as foot and cycle paths
  • Encouraging uptake of low emission vehicles by setting more ambitious targets for installing electric car charging points
  • Discouraging highly polluting vehicles from entering populated areas with incentives such as low emission or clean air zones


Media caption: UK scientists estimate air pollution cuts British people's lives by an average of six months

Prof Cosford said: “Transport and urban planners will need to work together with others involved in air pollution to ensure that new initiatives have a positive impact.

“Decision makers should carefully design policies to make sure that the poorest in society are protected against the financial implications of new schemes.”

PHE said that national government policy could support these local actions – for example, by allowing controls on industrial emissions in populated areas to take account of health impacts.


via BBC News – Science & Environment

March 11, 2019 at 03:06PM

To help fight the opioid crisis, a new tool from Maps and Search


In 2017, the Department of Health and Human Services (HHS) declared the opioid crisis a public health emergency, with over 130 Americans dying every day from opioid-related drug overdoses. Last month, we saw that search queries for “medication disposal near me” reached an all-time high on Google.


53 percent of prescription drug abuse starts with drugs obtained from family or friends, so we’re working alongside government agencies and nonprofit organizations to help people safely remove excess or unused opioids from their medicine cabinets. Last year, we partnered with the U.S. Drug Enforcement Administration (DEA) for National Prescription Take Back Day by developing a Google Maps API locator tool to help people dispose of their prescription drugs at temporary locations twice a year. With the help of this tool, the DEA and its local partners collected a record 1.85 million pounds of unused prescription drugs in 2018.

Today, we’re making it easier for Americans to quickly find disposal locations on Google Maps and Search all year round. A search for queries like “drug drop off near me” or “medication disposal near me” will display permanent disposal locations at your local pharmacy, hospital or government building so you can quickly and safely discard your unneeded medication.


This pilot has been made possible thanks to the hard work of many federal agencies, states and pharmacies. Companies like Walgreens and CVS Health, along with state governments in Alabama, Arizona, Colorado, Iowa, Massachusetts, Michigan and Pennsylvania have been instrumental in this project, contributing data with extensive lists of public and private disposal locations. The DEA is already working with us to provide additional location data to expand the pilot.

For this pilot, we also looked to public health authorities—like HHS—for ideas on how technology can help communities respond to the opioid crisis. In fact, combining disposal location data from different sources was inspired by a winning entry at the HHS’s Opioid Code-A-Thon held a year ago.

We’ll be working to expand coverage and add more locations in the coming months. To learn more about how your state or business can bring more disposal locations to Google Maps and Search, contact today.

via The Official Google Blog

February 22, 2019 at 02:14AM

Allegations Against the Maker of OxyContin Are Piling Up. Here’s What They Could Mean for the Billionaire Family Behind Purdue Pharma


Executives from Purdue Pharma, the manufacturer of the powerful opioid painkiller OxyContin, admitted in federal court in 2007 that Purdue’s marketing practices and interactions with doctors had understated the strength and addictive potential of the drug — an omission that many experts believe contributed to an opioid epidemic that claimed nearly 50,000 American lives in 2017 alone.

But on Thursday, the release of a previously sealed deposition from 2015 showed that Purdue executives knew of OxyContin’s strength long before that $600 million settlement. The deposition, which had been filed in court, revealed that Dr. Richard Sackler — part of the family that founded and controls Purdue, and who has served as Purdue’s president and co-chairman of the board — knew as early as 1997 that OxyContin was much stronger than morphine, but chose not to share that knowledge with doctors.

“We are well aware of the view held by many physicians that oxycodone [the active ingredient in OxyContin] is weaker than morphine. I do not plan to do anything about that,” Purdue’s head of sales and marketing, Michael Friedman, wrote in an email to Sackler, according to the deposition, which was obtained by ProPublica and co-published with STAT. “I agree with you,” Sackler wrote back. “Is there a general agreement, or are there some holdouts?”

The document’s publication comes just weeks after the release of an unredacted 277-page lawsuit filed against Purdue by Massachusetts Attorney General Maura Healey — itself just one of thousands of legal complaints brought against Purdue and other pharmaceutical companies by plaintiffs across the country, many of which have been rolled into one multi-district litigation in Ohio federal court. And as the evidence mounts, legal experts say Purdue could face serious consequences, from astronomical fines to injunctions that could threaten its ability to do business.

“One theme that clearly emerges from this deposition, brick by brick, is the foundation that is laid, that shows how even after this guilty plea there was a shocking lack of care for people that were at risk of abusing this drug and instead a singular focus on profit,” says Joseph Khan, a Philadelphia-based attorney who is currently bringing suits against corporations involved in the opioid epidemic.

As the New York Times reported, parts of Sackler’s deposition are in conflict with his previous testimony. For example, a 2006 Department of Justice report suggested he knew in 1999 that users in internet chatrooms were discussing abuse of the drug. In the deposition, however, Sackler said he first learned of its street value in a 2000 Maine newspaper article.

In a statement provided to TIME, Purdue said the “intentional leak of the deposition is a clear violation of the court’s order and, as such, is regrettable.” The statement adds that, “Dr. Sackler described Purdue’s efforts to adhere to all relevant laws and regulations and to appropriately reflect OxyContin’s risks of abuse and addiction as the science of opioid pain therapy evolved over time.”

Much of the material included in the deposition pertains to activity carried out before the company’s 2007 settlement, while Healey’s suit relates to post-2007 behavior. But Khan says the ramifications of the document are still relevant today, given the judgements Purdue could face from juries.

“There are straight contradictions between what’s in here and what the Department of Justice has put together. This is not something that will play well in front of a jury,” Khan says. “They don’t have as much leverage as they might want.”

The Massachusetts complaint also includes dramatic accusations about how much Purdue executives knew about their blockbuster drug, and when they knew it.

According to the lawsuit, members of the Sackler family and other Purdue executives purposefully downplayed the addictive properties of OxyContin and promoted sales tactics meant to encourage doctors to prescribe as much OxyContin as possible, in the highest doses and for the longest durations — despite the potential risks of abuse, and despite the terms of Purdue’s prior settlement with the federal government. The suit also details Purdue’s plans to sell addiction treatments, helping it dominate “the pain and addiction spectrum.” Purdue’s board, controlled by the Sacklers, also voted to pay out $4 billion to the family between 2007 and 2018, the documents show.

In a statement provided to TIME, a Purdue representative said the attorney general’s office “seeks to publicly vilify Purdue, its executives, employees and directors by taking out of context snippets from tens of millions of documents and grossly distorting their meaning. The complaint is riddled with demonstrably inaccurate allegations,” they said, and “offers little evidence to support its sweeping legal claims.” Purdue fought to keep portions of the suit from being released publicly.

If successful, Massachusetts’ lawsuit could force Purdue to pay not only significant fines, but also require the company to cease certain behaviors and make efforts to remedy the damages it has allegedly caused, Khan says.

“If you think about what would restitution look like, these are staggering, almost incalculable costs,” Khan says. But the problem goes beyond money. “What would it mean to stop this epidemic they’re accused of putting into place?” he asks. “You’re not going to find anyone who knows anything about the opioid epidemic who will just say you can solve this problem overnight with a quick fix.”

Further complicating matters, Purdue’s future hinges on far more than a single lawsuit.

John Jacobi, a professor of health law and policy at Seton Hall Law School, called the Massachusetts complaint “extraordinary in the length and depth of the allegations against individual defendants,” but says it is “more or less consistent” with the roughly 1,200 complaints included in the Ohio MDL, as well as the hundreds of others individually making their way through state court systems.

And for that reason, Jacobi says, Purdue could be facing consequences much larger than those included in Healey’s complaint. Opioid manufacturers could face a situation similar to the 1998 Master Settlement Agreement with Big Tobacco, which forced five major manufacturers to pay out billions of dollars over cigarette marketing and promotional practices. (Mike Moore, the lawyer who orchestrated the Master Settlement Agreement, is now bringing a new suit against opioid distributors and manufacturers. He was not immediately available for comment to TIME.)

“Many people have suggested that the only way out of the thicket that all of these litigants find themselves in would be some sort of global settlement similar to what was achieved in the tobacco litigation, and I don’t think that’s a far-fetched suggestion,” Jacobi says. “All of those, at some point, will be gathered up and resolved.”

Khan agrees that the volume of lawsuits in the MDL could hold a major threat for opioid manufacturers. And the results of MDL cases set for trial later this year will likely set the tone for other individual suits, like Healey’s, filed around the country, he says.

“There becomes a point at which it becomes mathematically impossible for every one of those plaintiffs to receive what they’re seeking,” Khan says. “Some of these companies are not going to be equipped to survive. Purdue may or may not be differently situated.”


via Healthland

February 23, 2019 at 07:32AM

Encrypting DNS end-to-end


Over the past few months, we have been running a pilot with Facebook to test the feasibility of securing the connection between our resolver and Facebook’s authoritative name servers. Traditionally, the connection between a resolver and an authoritative name server is unencrypted, i.e. carried over plain UDP.

In this pilot we tested how an encrypted connection using TLS impacts the end-to-end latency between our resolver and Facebook’s authoritative name servers. Even though the initial connection handshake adds some latency, the overhead is amortized over many queries. The resulting DNS latency between the two is on par with that of the average UDP connection.
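The framing involved is simple to sketch. The snippet below is an illustration, not code from the pilot (the function name `buildDnsQuery` is ours): DNS over TLS (RFC 7858) reuses the TCP convention of prefixing each DNS message with a two-byte, big-endian length, so an encrypted stream can carry exactly the same messages that normally travel in UDP datagrams.

```javascript
// Build a minimal DNS A-record query and frame it for a TCP/TLS stream
// (2-byte length prefix per RFC 1035 §4.2.2, reused by DNS over TLS).
function buildDnsQuery(hostname, id = 0x1234) {
  // QNAME: each label prefixed with its length, terminated by a zero byte.
  const qname = [];
  for (const label of hostname.split(".")) {
    qname.push(label.length, ...[...label].map((c) => c.charCodeAt(0)));
  }
  qname.push(0); // root label terminator

  const header = [
    id >> 8, id & 0xff, // transaction ID
    0x01, 0x00,         // flags: standard query, recursion desired
    0x00, 0x01,         // QDCOUNT = 1 question
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // AN/NS/ARCOUNT = 0
  ];
  const question = [...qname, 0x00, 0x01, 0x00, 0x01]; // QTYPE=A, QCLASS=IN
  const msg = Uint8Array.from([...header, ...question]);

  // The stream framing: big-endian length, then the message itself.
  const framed = new Uint8Array(msg.length + 2);
  framed[0] = msg.length >> 8;
  framed[1] = msg.length & 0xff;
  framed.set(msg, 2);
  return framed;
}
```

Because the TLS handshake happens once per connection, every subsequent framed query rides the same encrypted stream, which is why the handshake cost amortizes away over many queries.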

To learn more about how the pilot went, and to see more detailed results, check out the complete breakdown over on Code, Facebook’s Engineering blog.


via The Cloudflare Blog

December 22, 2018 at 01:02AM

Athenian Project Turns One: Are Election Websites Safer?


One year ago, Cloudflare launched the Athenian Project to provide free Enterprise-level service to election and voter registration websites run by state and local governments in the United States. Through this project, we have helped over 100 entities in 24 states protect their websites from denial of service attacks, SQL injection, and other malicious efforts aimed at undermining the integrity of their elections. With the end of the year approaching, and the November 6th US midterm elections behind us, we wanted to look back at the project and what we have learned as we move towards 2020.

US Midterm Election Day

The morning of November 6th was full of anticipation for the Athenian Project team with the policy, engineering and support teams ready as polls opened in the East. Early in the day, we were notified by our partner at the CDT that some elections websites were experiencing downtime. Mobilizing to help these groups, we reached out to the website administrators and, through the course of the day, on-boarded over 30 new county-level websites to the Athenian Project and helped them manage the unpredictably large amounts of legitimate traffic.

This last-minute effort would not have been possible without the help of the CDT and all of the other organizations working to maintain election integrity. Each organization brings their own strengths, and it took everyone working together, as well as preparation and diligence on the part of election officials, to make election day a success.

Image: “I Voted” stickers (Creative Commons Attribution: Element5 Digital on Pexels)

Civic Engagement Online

In looking at the aggregated election day data, the biggest story is one of engagement. In the month leading up to the November election, voter registration and election websites on the Athenian Project received nearly three times the number of requests as in September or any other month preceding it. Athenian Project websites received more requests in just the first seven days of November than in any other month except October.

When we first started the Athenian Project, we expected denial of service and other attacks to be the driving concern. However, we soon found that many state and local election websites experience large fluctuations in legitimate traffic on election day, especially in the event of a contested election, and that their administrators appreciated having a CDN to help manage these events. As can be seen below, traffic levels, already higher than usual on election day, at times suddenly spiked to four times above the day’s average for certain websites.

Requests to Athenian Project websites on 11/6/18

Keeping a Lookout for Bad Actors

We are happy to report that we didn’t see any evidence of a coordinated set of attacks across the election websites on our service. There were, however, a variety of attacks stopped by rules within our Web Application Firewall (WAF). The prevented attacks included scans by malicious bots impersonating helpful bots. These scans enable malicious actors to check for vulnerabilities to exploit, and were stopped using fake user-agent rules which can identify the malicious bot’s attempt to spoof its identity. The WAF also stopped a variety of cross-site scripting attempts, forced login attempts, and SQL injection attacks aimed at gaining access to databases. The attacks appear to have been Internet-wide attacks targeting specific known vulnerabilities rather than election website specific attacks. This finding reinforces our belief that improving cybersecurity is vital for everyone on the Internet every day, not just in response to large events.
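Cloudflare’s actual WAF rules aren’t published in this post, but the two categories mentioned above are easy to illustrate with a deliberately simplified sketch. Everything here (the `inspectRequest` function, the regex patterns) is a toy assumption for illustration; real rules are far more sophisticated.

```javascript
// Toy WAF sketch. A request that claims a well-known crawler identity is
// suspicious unless it has been independently verified (e.g. via reverse
// DNS); the SQLi signature is a classic tautology/UNION pattern.
const CRAWLER_UA = /Googlebot|bingbot/i;
const SQLI_SIGNATURE = /'\s*(or|and)\s+\d+\s*=\s*\d+|union\s+select/i;

function inspectRequest({ userAgent = "", query = "", crawlerVerified = false }) {
  // Fake user-agent rule: a spoofed crawler identity gets blocked.
  if (CRAWLER_UA.test(userAgent) && !crawlerVerified) {
    return { action: "block", reason: "fake user-agent" };
  }
  // SQL injection rule: decode the query string, then pattern-match.
  if (SQLI_SIGNATURE.test(decodeURIComponent(query))) {
    return { action: "block", reason: "sql injection" };
  }
  return { action: "allow" };
}
```

The key point the toy captures is that both checks are generic, which is consistent with the observation that the blocked attacks were Internet-wide scans rather than election-specific ones.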

Where We’re Going in 2019

Moving forward, we are hoping to continue improving the reach of the project. One year is a relatively short time, especially when considering code freezes around both the primaries and general elections, and we hope to continue education efforts and on-boardings in advance of the 2020 elections. One item we noticed was that, despite making it easy to obtain SSL certificates and use TLS on Cloudflare, not all of the requests to Athenian Project websites are encrypted. This happens either as a result of misconfiguration, or because Universal SSL has been disabled for the site and no non-Cloudflare certificates have been uploaded. As a result, we will strive to do a better job of encouraging SSL adoption and educating website administrators about the importance of encryption.

Image: US Capitol building (Creative Commons Attribution: Pixabay)

We would like to thank election officials and administrators across the country for their hard work in maintaining the integrity of our midterm elections. Election cybersecurity was not a story, and that is a testament to the commitment of these individuals.

With the midterm elections over, the Cloudflare Athenian Project team is setting our sights on 2020 and any special elections which may come before then as well as looking at opportunities to expand the Athenian Project into new areas. If you run a state or local election website and are interested in the Athenian Project, feel free to reach out through our web form at


via The Cloudflare Blog

December 22, 2018 at 04:06AM

Improving request debugging in Cloudflare Workers


At Cloudflare, we are constantly looking into ways to improve development experience for Workers and make it the most convenient platform for writing serverless code.

As some of you might have already noticed, either from our public release notes or in your Cloudflare Workers dashboard, there recently was a small but important change in the look of the inspector.

But before we go into figuring out what it is, let’s take a look at our standard example on

The example worker code featured here acts as a transparent proxy, while printing requests / responses to the console.
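The snippet itself isn’t reproduced here, but a transparent proxy that logs traffic boils down to something like the following (a reconstruction from the description, not the exact example code):

```javascript
// Sketch of a transparent-proxy Worker that logs requests and responses.
// In the Workers runtime this handler would be registered with:
//   addEventListener('fetch', event => event.respondWith(handleRequest(event.request)));
async function handleRequest(request) {
  console.log("Request:", request.method, request.url);
  const response = await fetch(request); // pass straight through to the origin
  console.log("Response:", response.status, response.statusText);
  return response; // returned unmodified - the proxy is transparent
}
```

It is exactly the `fetch(request)` subrequest in the middle that client-side devtools cannot see, which is what the rest of this post addresses.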

Commonly, when debugging Workers, all you can see from the client-side devtools is the interaction between your browser and the Cloudflare Workers runtime. However, as in most other server-side runtimes, the interaction between your code and the actual origin is hidden.

This is where console.log comes in. Although not the most convenient, printing random things out is a fairly popular debugging technique.

Unfortunately, its default output doesn’t help much with debugging network interactions. If you try to expand either the request or the response object, all you can see is a bunch of lazy accessors:

You could expand them one-by-one, getting some properties back, but, when it comes to important parts like headers, that doesn’t help much either:

So, since the launch of Workers, what we have been able to suggest instead are certain JS tricks to convert headers to a more readable format:

This works somewhat better, but doesn’t scale well, especially if you’re trying to debug complex interactions between various requests on a page and subrequests coming from a worker. So we thought: how can we do better?

If you’re familiar with Chrome DevTools, you might have noticed before that we were already offering its trimmed-down version in our UI with basic Console and Sources panels. The obvious solution is: why not expose the existing Network panel in addition to these? And we did just* that.

* Unfortunately, this is easier said than done. If you’re already familiar with the Network tab and are interested in the technical implementation details, feel free to skip the next section.

What can you do with the new panel?

You should be able to use most of the things available in the regular Chrome DevTools Network panel, but instead of inspecting the interaction between the browser and Cloudflare (which is as much as browser devtools can give you), you are now able to peek into the interaction between your Worker and the origin as well.

This means you’re able to view request and response headers, including both those internal to your worker and the ones provided by Cloudflare:

Check the original response to verify content modifications:

Same goes for raw responses:

You can also check the time it took the Worker to reach and get data from your website:

However, note that timings from a debugging service will differ from those in production in different locations, so it makes sense to compare them only with other requests on the same page, or with the same request as you iterate on your Worker’s code.

You can view the initiator of each request – this might come in handy if your Worker contains complex routing handled by different paths, or if you simply want to check which requests on the page were intercepted and re-issued:

Basic features like filtering by type of content also work:

And, finally, you can copy or even export subrequests as HAR for further inspection:

How did we do this?

So far we have been using a built-in mode of the inspector which was specifically designed with JavaScript-only targets in mind. This allows it to avoid loading most of the components that would require a real browser (Chromium-based) backend, and instead leaves just the core that can be integrated directly with V8 in any embedder, whether it’s Node.js or, in our case, Cloudflare Workers.

Luckily, the DevTools Protocol itself is pretty well documented, to facilitate third-party implementors.

While the protocol is commonly used from the client side (for editor integration), there are third-party implementors of the server side too, even for non-JavaScript targets like Lua, Go and ClojureScript, and even for system-wide network debugging on both desktop and mobile.

So there is nothing preventing us from providing our own implementation of the Network domain that gives a native DevTools experience.

On the Workers backend side, we are already in charge of the network stack, which means we have access to all the necessary information to report, and we can wrap all the request/response handlers in our own hooks to send it back to the inspector.

Communication between the inspector and the debugger backend is happening over WebSockets. So far we’ve been just receiving messages and passing them pretty much directly to V8 as-is. However, if we want to handle Network messages ourselves, that’s not going to work anymore and we need to actually parse the messages.

To do that in a standard way, V8 provides some build scripts to generate protocol handlers for any given list of domains. While these are used by Chromium, they require quite a bit of configuration and custom glue for different levels of message serialisation, deserialisation and error handling.

On the other hand, the protocol used for communication is essentially just JSON-RPC, and capnproto, which we’re already using in other places behind the scenes, provides JSON (de)serialisation support, so it’s easier to reuse that than to build a separate glue layer for V8.

For example, to provide bindings for Runtime.CallFrame we need to just define a capnp structure like this:

struct CallFrame {
  # Stack entry for runtime errors and assertions.
  functionName @0 :Text; # JavaScript function name.
  scriptId @1 :ScriptId; # JavaScript script id.
  url @2 :Text; # JavaScript script name or url.
  lineNumber @3 :Int32; # JavaScript script line number (0-based).
  columnNumber @4 :Int32; # JavaScript script column number (0-based).
}

Okay, so by combining these two we can now parse and handle supported Network inspector messages ourselves and pass the rest through to V8 as usual.
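The split described above looks roughly like the following sketch. The method names come from the public DevTools Protocol, but the dispatch shape (the `routeInspectorMessage` function and its arguments) is our illustrative assumption, not Cloudflare’s actual code:

```javascript
// Route DevTools protocol messages: Network.* methods we implement
// ourselves; everything else is forwarded untouched to V8's inspector.
function routeInspectorMessage(raw, networkHandlers, forwardToV8) {
  const msg = JSON.parse(raw); // JSON-RPC-like: { id, method, params }
  const [domain] = msg.method.split(".");
  if (domain === "Network" && networkHandlers[msg.method]) {
    // Handled locally: reply with a matching-id result message.
    return JSON.stringify({ id: msg.id, result: networkHandlers[msg.method](msg.params) });
  }
  forwardToV8(raw); // e.g. Debugger.*, Runtime.* stay with V8
  return null;
}
```

Keeping the message ids intact is what lets the unmodified DevTools frontend remain oblivious to which side actually answered.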

Now, we needed to make some changes to the frontend. Wait, you might ask, wasn’t the entire point of these changes to speak the same protocol as frontend already does? That’s true, but there are other challenges.

First of all, because the Network tab was designed to be used in a browser, it relies on various components that are irrelevant to us and, if pulled in as-is, would not only make the frontend code larger, but also require extra backend support. Some of them are used for cross-tab integration (e.g. with the Profiler), but some are part of the Network tab itself – for example, it doesn’t make much sense to use request blocking or mobile throttling when debugging server-side code. So we had some manual untangling to do here.

Another interesting challenge was in handling response bodies. Normally, when you click on a request in the Network tab in the browser and then ask to see its response body, the devtools frontend sends a Network.getResponseBody message to the browser backend, which sends the body back.

What this means is that, as long as the Network tab is active, the browser has to store all of the responses for all of the requests from the page in memory, without knowing which of them will actually be requested later. Such lazy handling makes perfect sense for local or even remote Chrome debugging, where you are commonly fully in charge of both sides.

However, for us it wouldn’t be ideal to have to store all of these responses from all of our users in memory on the debugging backend. After some back and forth on different solutions, we decided to deviate from the protocol: instead of storing bodies on the backend, we send original response bodies to the inspector frontend as they come through, and let the frontend store them. This might not seem ideal either, since it sends data over the network during debugging sessions that may never be viewed, but the tradeoff makes more sense for a shared debugging backend.
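Sketched out, the deviation means the frontend keeps its own body store and answers Network.getResponseBody locally. This is an illustration of the idea, not the actual frontend code; the function names are ours:

```javascript
// Frontend-side store for the push-based deviation: response bodies
// arrive from the backend as they stream through the Worker, keyed by
// the protocol's requestId, so the backend can forget them immediately.
const responseBodies = new Map();

function onBodyPushed(requestId, body) {
  responseBodies.set(requestId, body);
}

// Answered locally - no Network.getResponseBody round trip to the backend.
function getResponseBody(requestId) {
  return { body: responseBodies.get(requestId) ?? "", base64Encoded: false };
}
```

The memory cost thus lands on the single debugging browser tab rather than on a backend shared by every user.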

There were various smaller challenges and bug fixes to be made and upstreamed, but let them stay behind the scenes.

Is this feature useful to you? What other features would help you to debug and develop workers more efficiently? Or maybe you would like to work on Workers and tooling yourself?

Let us know!

P.S.: If you’re looking for a fun personal project for the holidays, this could be your chance to try out Workers, and play around with our new tools.


via The Cloudflare Blog

December 28, 2018 at 11:23PM