Making Algorithms More Like Kids: What Can Four-Year-Olds Do That AI Can’t?

https://meson.in/2jJFgAH

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.

Alan Turing famously wrote this in his groundbreaking 1950 paper, Computing Machinery and Intelligence, laying out a framework that generations of machine learning researchers have followed. Yet, despite increasingly impressive specialized applications and breathless predictions, we’re still some distance from programs that can simulate any mind, even one much less complex than a human’s.

Perhaps the key came in what Turing said next: “Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.” This seems, in hindsight, naive. Moravec’s paradox applies: tasks that seem like the height of human intellect, such as a stimulating game of chess, are easy for machines, while seemingly simple tasks can be extremely difficult. But if children are our template for the simplest general human-level intelligence we might program, then surely it makes sense for AI researchers to study the many millions of existing examples.

This is precisely what Professor Alison Gopnik and her team at Berkeley do. They seek to answer the question: how sophisticated are children as learners? Where are children still outperforming the best algorithms, and how do they do it?

General, Unsupervised Learning

Some of the answers were outlined in a recent talk at the International Conference on Machine Learning. The first and most obvious difference between four-year-olds and our best algorithms is that children are extremely good at generalizing from a small set of examples. ML algorithms are the opposite: they can extract structure from huge datasets that no human could ever process, but generally large amounts of training data are needed for good performance.

This training data usually has to be labeled, although unsupervised learning approaches are also making progress. In other words, there is often a strong “supervisory signal” coded into the algorithm and its dataset, consistently reinforcing the algorithm as it improves. Children can learn to perform generally on a wide variety of tasks with very little supervision, and they can generalize what they’ve learned to new situations they’ve never seen before.

Even in image recognition, where ML has made great strides, algorithms require a large set of images before they can confidently distinguish objects; children may only need one. How is this achieved?

Professor Gopnik and others argue that children have “abstract generative models” that explain how the world works. In other words, children have imagination: they can ask themselves abstract questions like “If I touch this sharp pin, what will happen?” And then, from very small datasets and experiences, they can anticipate the solution.

In doing so, they are correctly inferring the relationship between cause and effect from experience. Children know that this object will prick them unless handled with care because it’s pointy, not because it’s silver or because they found it in the kitchen. This may sound like common sense, but being able to make this kind of causal inference from small datasets is still hard for algorithms to do, especially across such a wide range of situations.

The Power of Imagination

Generative models are increasingly being employed by AI researchers—after all, the best way to show that you understand the structure and rules of a dataset is to produce examples that obey those rules. Such neural networks can compress hundreds of gigabytes of image data into hundreds of megabytes of statistical parameter weights and learn to produce images that look like the dataset. In this way, they “learn” something of the statistics of how the world works. But generalizing from generative models the way children do remains, according to Gopnik, computationally infeasible.

This is far from the only trick children have up their sleeve which machine learning hopes to copy. Experiments from Professor Gopnik’s lab show that children have well-developed Bayesian reasoning abilities. Bayes’ theorem is all about assimilating new information into your assessment of what is likely to be true based on your prior knowledge. For example, finding an unfamiliar pair of underwear in your partner’s car might be a worrying sign—but if you know that they work in dry-cleaning and use the car to transport lost clothes, you might be less concerned.
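Bayes’ theorem itself fits in a couple of lines. Here is a minimal sketch of that underwear example, with entirely made-up priors and likelihoods, purely to show how new evidence shifts belief:

```python
# Illustrative only: toy numbers, not drawn from Gopnik's experiments.
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothesis H: "something worrying is going on."
# Evidence E: an unfamiliar pair of underwear turns up in the car.

# Without the dry-cleaning background, E is hard to explain innocently,
# so it is much likelier under H than under not-H.
print(posterior(prior=0.05, p_evidence_given_h=0.5,
                p_evidence_given_not_h=0.01))   # ~0.72

# Knowing your partner hauls lost clothes for a dry-cleaner makes E
# almost as likely under the innocent explanation, so belief barely moves.
print(posterior(prior=0.05, p_evidence_given_h=0.5,
                p_evidence_given_not_h=0.4))    # ~0.06
```

The same updating logic, applied to blocks and toy machines instead of laundry, is the kind of inference the Berkeley experiments probe.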

Scientists at Berkeley present children with logical puzzles, such as machines that can be activated by placing different types of blocks or complicated toys that require a certain sequence of actions to light up and make music.

When they are given several examples (such as a small dataset of demonstrations of the toy), they can often infer the rules behind how the new system works from the age of three or four. These are Bayesian problems: the children efficiently assimilate the new information to help them understand the universal rules behind the toys. When the system isn’t explained, the children’s inherent curiosity leads them to experiment with these systems—testing different combinations of actions and blocks—to quickly infer the rules behind how they work.

Indeed, it’s the curiosity of children that actually allows them to outperform adults in certain circumstances. When an incentive structure is introduced—i.e. “points” that can be gained and lost depending on your actions—adults tend to become conservative and risk-averse. Children are more concerned with understanding how the system works, and hence deploy riskier strategies. Curiosity may kill the cat, but in the right situation it can allow children to win the game, identifying rules that adults miss because they avoid any action that might result in punishment.

To Explore or to Exploit?

This research shows not only the innate intelligence of children, but also touches on classic problems in algorithm design. The explore-exploit problem is well known in machine learning. Put simply, if you only have a certain amount of resources (time, computational ability, etc.), are you better off searching for new strategies, or simply taking the path that seems most likely to lead to gains?

Children favor exploration over exploitation. This is how they learn—through play and experimentation with their surroundings, through keen observation and asking as many questions as they can. Children are social learners: as well as interacting with their environment, they learn from others. Anyone who has ever had to deal with a toddler endlessly using that favorite word, “why?”, will recognize this as a feature of how children learn! As we get older—kicking in around adolescence in Gopnik’s experiments—we switch to exploiting the strategies we’ve already learned rather than taking those risks.

These concepts are already being imitated in machine learning algorithms. One example is the idea of “temperature” for algorithms that look through possible solutions to a problem to find the best one. A high-temperature search is more likely to pick a random move that might initially take you further away from the reward. This means that the optimization is less likely to get “stuck” on a particular solution that’s hard to improve upon, but may not be the best out there—but it’s also slower to find a solution. Meanwhile, searches with lower temperature take fewer “risky” random moves and instead seek to refine what’s already been found.
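One classic optimization method that uses temperature this way is simulated annealing; the sketch below is a toy illustration, where the objective function, move size, and cooling schedule are arbitrary choices rather than anything drawn from a particular system:

```python
import math
import random

def bumpy(x):
    """A toy objective with many local optima."""
    return math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.1 * (x - 2) ** 2

def anneal(temperature=2.0, cooling=0.995, steps=5000):
    x = random.uniform(-5, 5)
    best_x, best_val = x, bumpy(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)      # a "risky" random move
        delta = bumpy(candidate) - bumpy(x)
        # At high temperature, even moves that look worse are often accepted,
        # which keeps the search from getting stuck on a mediocre solution.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate
        if bumpy(x) > best_val:
            best_x, best_val = x, bumpy(x)
        temperature *= cooling                     # gradually "grow up"
    return best_x, best_val

print(anneal())
```

Started hot, the search wanders widely before settling; started cold, it only ever refines the first hill it happens to land on.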

In many ways, humans develop in the same way, from high-temperature toddlers who bounce around playing with new ideas and new solutions, however strange they seem, to low-temperature adults who take fewer risks and are more methodical, but also less creative. This is how we try to program our machine learning algorithms to behave as well.

It’s nearly 70 years since Turing first suggested that we could create a general intelligence by simulating the mind of a child. The children he looked to for inspiration in 1950 are all knocking on the door of old age today. Yet, for all that machine learning and child psychology have developed over the years, there’s still a great deal that we don’t understand about how children can be such flexible, adaptive, and effective learners.

Understanding the learning process and the minds of children may help us to build better algorithms, but it could also help us to teach and nurture better and happier humans. Ultimately, isn’t that what technological progress is supposed to be about?

Image Credit: BlueBoeing / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

June 26, 2019 at 11:15PM

Cancer-Killing Living Drug Is Made Safer With a Simple Off Switch

https://meson.in/2jKzSwS

When it comes to battling cancer, our most powerful weapon is also our most dangerous.

You’ve heard of CAR-T: the cellular immunotherapy extracts a patient’s own immune cells, amps up their tumor-hunting prowess using gene therapy, and infuses the super-soldiers back into the patient to pursue and rip their targets to shreds—literally. Since late 2017, the FDA has approved CAR-T therapy for leukemia and lymphoma, deadly childhood cancers generally unmanageable with classic chemotherapy or radiation. In the realm of revolutionary treatments, CAR-T absolutely fits the bill.

But there’s a serious problem.

Unlike traditional chemicals, CAR-T cells are living drugs that further proliferate inside the body. While this is great for replenishing the cancer-killing troops, it comes with a deadly caveat: the cells may go full-on berserk. Once unleashed, there are few ways to control their activity. In some cases, the good guys turn monstrous, releasing chemicals in a cascade that propels the body into immune overdrive. Left uncontrolled, the result is often fatal.

This week, a collaboration between the University Hospital in Würzburg, Germany, and the Memorial Sloan Kettering Cancer Center in New York found an easy and reliable way to slam on the CAR-T brake. Rather than acting on the CAR-T cells themselves, the antidote severs the downstream actions of the cells, leaving them in a dormant state that can be re-awakened.

The drug, called dasatinib, essentially puts CAR-T on a leash—one strong enough to stop deadly runaway immune reactions in their tracks. Currently approved for some types of leukemia, dasatinib is an old-school drug with over a decade of history and is intimately familiar to the oncology world.

“The evaluation and implementation of dasatinib as an on/off control drug in CAR-T cell immunotherapy should be feasible and straightforward,” the authors wrote. The results were published in Science Translational Medicine, and matched independent conclusions from another team.

A Dial for Killer Cells

Rather than focusing on the cells themselves, the team looked at what happens after CAR-T cells grab onto their target.

As immune cells, CAR-T soldiers already have protein “claws” embedded on their surface that recognize all sorts of invaders such as bacteria. CAR-Ts, however, are further armed with genetically engineered claws that more efficiently hunt down a particular type of tumor.

These claws are short-range weapons. The cells need to physically interact with their target by grabbing onto proteins dotted on the cancer cell’s surface with the claws. This “handshake” causes a cascade of biochemical reactions inside the CAR-T cells, which triggers them to release a hefty cloud of immune chemicals—dubbed cytokines—toxic to the tumor. The end result is rather grisly: the tumor “melts,” literally breaking apart into tiny bio-building-blocks that the body subsequently absorbs or expels.

From previous research, the team noticed that dasatinib quiets down one of the molecules involved in the chain reaction inside CAR-T cells after the handshake. So it makes sense that blocking this deadly game of telephone could halt CAR-T activity.

They first tested their idea in cultured tumor cells in petri dishes. Using a popular CAR-T recipe—one with high rates of complete remission in recent clinical trials—the team challenged the tumors with their engineered killers, either with or without dasatinib. Remarkably, treatment with the drug completely halted CAR-T’s ability to rip their targets apart. One direct dose worked for hours, and when given multiple doses, the drug could inhibit the cells’ activity for at least a week.

Encouraged, the team tried the drug on several other CAR-T recipes, which all trigger the same downstream reaction. The trick worked every time. It suggests that any CAR-T cells that use this “telephone” pathway can be controlled using dasatinib, the team concluded.

The drug was also tunable and reversible, two extremely powerful traits in pharmaceutics.

Tunable means the drug’s effect depends on dose: like turning a dial, the team can predictably control its inhibitory action by how much they add. And when CAR-Ts need to go back into full force, all the team has to do is sit and wait—literally—for the cell to metabolize the drug away. As soon as the levels drop, CAR-Ts spring back into action with no side effects.

More clinically relevant, the drug didn’t just work in isolated cells; it also worked in mice with tumors. With just two doses, the team was able to keep CAR-T therapy in check. Once they stopped the treatment, the CAR-T cells sprang back, and the team again detected their chemical attacks on the tumors. Because CAR-T cells are expensive to engineer, that’s a huge perk. It means they can linger inside the body awaiting orders, without a full withdrawal of the troops.

An Antidote for Immune Overreaction

Ask any oncologist, and “cytokine storm” is at the top of their list of CAR-T dangers. Because these cells proliferate inside the body, they can in some conditions—specifics still unclear—dump a bucketful of toxic immune molecules into the body. This outpouring then causes native immune cells to respond in kind, releasing their own cytokines.

“It’s a runaway response,” said Travis Young at the California Institute for Biomedical Research. “There’s no way to control if that patient will have a 100-, a 1,000-, or a 10,000-fold expansion of their CAR-T cells.”

The result is a tornado-scale immune reaction that destroys indiscriminately, tumor or not. In some patients, it’s a death sentence. Because berserk CAR-T cells form the root of the immune tornado, the team tested in mice whether dasatinib can neutralize the deadly side effect. Here, they used a mouse model previously shown to induce an extreme cytokine storm. While all of the tumor-laden mice received CAR-T, some also got a shot of dasatinib three hours later.

Without the antidote, 75 percent of CAR-T infused mice died within two days. With the drug, fatality dropped to 30 percent. It’s not zero, but it does mean that some patients may be saved.

Control Is King

Because dasatinib has been around for over a decade, there’s plenty of data on how human bodies handle the drug. The team believes that popping a pill every six hours—or at even longer intervals—should allow enough drug inside the body to control CAR-T in patients.

This level of control has so far been outside the grasp of oncologists, despite numerous previous ideas. One such suggested method is to build an off switch directly into the cells. Although effective, once activated it also destroys any tumor-killing ability. To continue the treatment, the patient would have to start from scratch.

“As a consequence, physicians and patients have been reluctant to use these safety switches, even when side effects…were severe,” the authors explained.

Another common treatment is steroids. When directly pitted against dasatinib, however, the team found that steroids act more slowly and are less effective at controlling CAR-T activity. Steroids also increase the risk of infections, whereas dasatinib may actually work together with CAR-T to further enhance cancer-treatment efficacy.

To Michael Gilman, CEO of Obsidian Therapeutics based in Massachusetts, the future of CAR-T is bright, not only for blood cancers but also for solid tumors, so long as it’s under control.

“In order for that to happen, these therapies have to be tamed. They have to behave like pharmaceuticals where doses can be controlled and sensitively managed by everyday physicians,” Gilman said.

Image Credit: Meletios Verras / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

July 10, 2019 at 11:00PM

Can AI Save the Internet from Fake News?

https://meson.in/2XgKUaW

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

Scientists are scrambling for solutions on how to combat deepfakes, while at the same time others are continuing to refine the techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black and white films to color, though it also conceded that the technology could be used to develop deepfakes.

Words Matter with Fake News

While the current spotlight is on how to combat video and image manipulation, a prolonged trench warfare on fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t “fake news” in the knee-jerk sense of the term, slapped onto fact-based reporting that happens to be unflattering to its subject. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
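As a rough illustration of how cues like hyperbole, punctuation, and sentence length can become machine-readable features, here is a toy sketch; the word list and features are invented for this example and are not the U-M team’s actual model:

```python
import re

# Invented example lexicon; a real system would learn or curate this list.
HYPERBOLE = {"overwhelming", "extraordinary", "shocking", "unbelievable"}

def linguistic_features(text):
    """Turn a document into a small vector of deception-style cues."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "hyperbole_rate": sum(w in HYPERBOLE for w in words) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "exclamations": text.count("!"),
    }

print(linguistic_features(
    "Shocking! An overwhelming, extraordinary cover-up. Unbelievable."
))
# Features like these would then feed a standard classifier
# (logistic regression, SVM, etc.) trained on labeled real/fake articles.
```

A real detector would pour hundreds of such features into a trained classifier rather than eyeballing three of them.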

AI Versus AI

While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously-generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.

It performed almost as well against fake news created by a powerful new text-generation system called GPT-2 built by OpenAI, a nonprofit research lab founded by Elon Musk, classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, where the user types in a short prompt and GPT-2 bangs out a short story or poem based on the snippet of text.

No Silver AI Bullet

While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is developing different detectors using elements of deep learning and natural language processing, among other techniques. He explained that Logically’s models analyze information based on a three-pronged approach.

  • Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
  • Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
  • Content: The AI scans articles for hundreds of known indicators typically found in misinformation.

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”
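As a purely hypothetical illustration of that “collection of algorithms” idea, and emphatically not Logically’s actual pipeline, scores from the three signal types might be blended into a single risk figure something like this (all weights and scores below are invented):

```python
# Purely illustrative: not Logically's pipeline; weights and scores invented.
def misinformation_score(publisher_trust, network_anomaly, content_flags,
                         weights=(0.4, 0.3, 0.3)):
    """Blend three independent signals into one 0-1 risk score."""
    signals = (1 - publisher_trust, network_anomaly, content_flags)
    return sum(w * s for w, s in zip(weights, signals))

# A reputable outlet spreading normally, with a few flagged phrases:
print(misinformation_score(publisher_trust=0.9,
                           network_anomaly=0.1,
                           content_flags=0.2))   # 0.13 -> likely fine

# An unknown site going viral through bot-like accounts:
print(misinformation_score(publisher_trust=0.1,
                           network_anomaly=0.8,
                           content_flags=0.7))   # 0.81 -> send to a human
```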

The company released a consumer app in India back in February, just before that country’s election cycle, which proved a “great testing ground” for refining its technology ahead of the next app release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

Image Credit: Dennis Lytyagin / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

June 30, 2019 at 11:01PM

Facebook Quietly Admitted Millions More Instagram Users’ Passwords Were at Risk

https://meson.in/2PklN4P

(SAN FRANCISCO) — Millions more Instagram users were affected by a password security lapse than parent company Facebook acknowledged nearly four weeks ago.

The social media giant said in late March that it had inadvertently stored passwords in plain text, making it possible for its thousands of employees to search them. It said the passwords were stored on internal company servers, where no outsiders could access them.

Facebook said in a blog post Thursday that it now estimates that “millions” of Instagram users were affected by the lapse, instead of the “tens of thousands” it had originally reported. It had also said in March that the issue affected “hundreds of millions” of Facebook Lite users and millions of Facebook users. Facebook Lite is designed for people with older phones or slow internet connections.

Science.general

via Techland https://meson.in/2DLLW54

April 19, 2019 at 06:00AM

Top Takeaways From The Economist Innovation Summit

https://meson.in/2TdtQR8

Over the past few years, the word ‘innovation’ has degenerated into something of a buzzword. In fact, according to Vijay Vaitheeswaran, US business editor at The Economist, it’s one of the most abused words in the English language.

The word is over-used precisely because we’re living in a great age of invention. But the pace at which those inventions are changing our lives is fast, new, and scary.

So what strategies do companies need to adopt to make sure technology leads to growth that’s not only profitable, but positive? How can business and government best collaborate? Can policymakers regulate the market without suppressing innovation? Which technologies will impact us most, and how soon?

At The Economist Innovation Summit in Chicago last week, entrepreneurs, thought leaders, policymakers, and academics shared their insights on the current state of exponential technologies, and the steps companies and individuals should be taking to ensure a tech-positive future. Here’s their expert take on the tech and trends shaping the future.

Blockchain

There’s been a lot of hype around blockchain; apparently it can be used for everything from distributing aid to refugees to voting. However, it’s too often conflated with cryptocurrencies like Bitcoin, and we haven’t heard of many use cases. Where does the technology currently stand?

Julie Sweet, chief executive of Accenture North America, emphasized that the technology is still in its infancy. “Everything we see today are pilots,” she said. The most promising of these pilots are taking place across three different areas: supply chain, identity, and financial services.

When you buy something from outside the US, Sweet explained, it goes through about 80 different parties. 70 percent of the relevant data is replicated and is prone to error, with paper-based documents often to blame. Blockchain is providing a secure way to eliminate paper in supply chains, upping accuracy and cutting costs in the process.

One of the most prominent use cases in the US is Walmart—the company has mandated that all suppliers in its leafy greens segment be on a blockchain, and its food safety has improved as a result.

Beth Devin, head of Citi Ventures’ innovation network, added “Blockchain is an infrastructure technology. It can be leveraged in a lot of ways. There’s so much opportunity to create new types of assets and securities that aren’t accessible to people today. But there’s a lot to figure out around governance.”

Open Source Technology

Are the days of proprietary technology numbered? More and more companies and individuals are making their source code publicly available, and its benefits are thus more widespread than ever before. But what are the limitations and challenges of open source tech, and where might it go in the near future?

Bob Lord, senior VP of cognitive applications at IBM, is a believer. “Open-sourcing technology helps innovation occur, and it’s a fundamental basis for creating great technology solutions for the world,” he said. However, the biggest challenge for open source right now is that companies are taking out more than they’re contributing back to the open-source world. Lord pointed out that IBM has a rule about how many lines of code employees take out relative to how many lines they put in.

Another challenge area is open governance; blockchain by its very nature should be transparent and decentralized, with multiple parties making decisions and being held accountable. “We have to embrace open governance at the same time that we’re contributing,” Lord said. He advocated for a hybrid-cloud environment where people can access public and private data and bring it together.

Augmented and Virtual Reality

Augmented and virtual reality aren’t just for fun and games anymore, and they’ll be even less so in the near future. According to Pearly Chen, vice president at HTC, they’ll also go from being two different things to being one and the same. “AR overlays digital information on top of the real world, and VR transports you to a different world,” she said. “In the near future we will not need to delineate between these two activities; AR and VR will come together naturally, and will change everything we do as we know it today.”

For that to happen, we’ll need a more ergonomically friendly device than we have today for interacting with this technology. “Whenever we use tech today, we’re multitasking,” said product designer and futurist Jody Medich. “When you’re using GPS, you’re trying to navigate in the real world and also manage this screen. Constant task-switching is killing our brain’s ability to think.” Augmented and virtual reality, she believes, will allow us to adapt technology to match our brain’s functionality.

This all sounds like a lot of fun for uses like gaming and entertainment, but what about practical applications?  “Ultimately what we care about is how this technology will improve lives,” Chen said.

A few ways that could happen? Extended reality will be used to simulate hazardous real-life scenarios, reduce the time and resources needed to bring a product to market, train healthcare professionals (such as surgeons), or provide therapies for patients—not to mention education. “Think about the possibilities for children to learn about history, science, or math in ways they can’t today,” Chen said.

Quantum Computing

If there’s one technology that’s truly baffling, it’s quantum computing. Qubits, entanglement, quantum states—it’s hard to wrap our heads around these concepts, but they hold great promise. Where is the tech right now?

Mandy Birch, head of engineering strategy at Rigetti Computing, thinks quantum development is starting slowly but will accelerate quickly. “We’re at the innovation stage right now, trying to match this capability to useful applications,” she said. “Can we solve problems cheaper, better, and faster than classical computers can do?” She believes quantum’s first breakthrough will happen in two to five years, and that its highest potential is in applications like routing, supply chain, and risk optimization, followed by quantum chemistry (for materials science and medicine) and machine learning.

David Awschalom, director of the Chicago Quantum Exchange and senior scientist at Argonne National Laboratory, believes quantum communication and quantum sensing will become a reality in three to seven years. “We’ll use states of matter to encrypt information in ways that are completely secure,” he said. A quantum voting system, currently being prototyped, is one application.

Who should be driving quantum tech development? The panelists emphasized that no one entity will get very far alone. “Advancing quantum tech will require collaboration not only between business, academia, and government, but between nations,” said Linda Sapochak, division director of materials research at the National Science Foundation. She added that this doesn’t just go for the technology itself—setting up the infrastructure for quantum will be a big challenge as well.

Space

Space has always been the final frontier, and it still is—but it’s not quite as far-removed from our daily lives now as it was when Neil Armstrong walked on the moon in 1969.

The space industry has always been funded by governments and private defense contractors. But in 2009, SpaceX launched its first commercial satellite, and in subsequent years it has drastically cut the cost of spaceflight. More importantly, the company published its pricing, which brought transparency to a market that hadn’t seen it before.

Entrepreneurs around the world started putting together business plans, and there are now over 400 privately-funded space companies, many with consumer applications.

Chad Anderson, CEO of Space Angels and managing partner of Space Capital, pointed out that the technology floating around in space was, until recently, archaic. “A few NASA engineers saw they had more computing power in their phone than there was in satellites,” he said. “So they thought, ‘why don’t we just fly an iPhone?’” They did—and it worked.

Now companies have networks of satellites monitoring the whole planet, producing a huge amount of data that’s valuable for countless applications like agriculture, shipping, and observation. “A lot of people underestimate space,” Anderson said. “It’s already enabling our modern global marketplace.”

Next up in the space realm, he predicts, are mining and tourism.

Artificial Intelligence and the Future of Work

From the US to Europe to Asia, alarms are sounding about AI taking our jobs. What will be left for humans to do once machines can do everything—and do it better?

These fears may be unfounded, though, and are certainly exaggerated. It’s undeniable that AI and automation are changing the employment landscape (not to mention the way companies do business and the way we live our lives), but if we build these tools the right way, they’ll bring more good than harm, and more productivity than obsolescence.

Accenture’s Julie Sweet emphasized that AI alone is not what’s disrupting business and employment. Rather, it’s what she called the “triple A”: automation, analytics, and artificial intelligence. But even this fear-inducing trifecta of terms doesn’t spell doom, for workers or for companies. Accenture has automated 40,000 jobs—and hasn’t fired anyone in the process. Instead, they’ve trained and up-skilled people. The most important drivers to scale this, Sweet said, are a commitment by companies and government support (such as tax credits).

Imbuing AI with the best of human values will also be critical to its impact on our future. Tracy Frey, Google Cloud AI’s director of strategy, cited the company’s set of seven AI principles. “What’s important is the governance process that’s put in place to support those principles,” she said. “You can’t make macro decisions when you have technology that can be applied in many different ways.”

High Risks, High Stakes

This year, Vaitheeswaran said, 50 percent of the world’s population will have internet access (he added that he’s disappointed that percentage isn’t higher given the proliferation of smartphones). As technology becomes more widely available to people around the world and its influence grows even more, what are the biggest risks we should be monitoring and controlling?

Information integrity—being able to tell what’s real from what’s fake—is a crucial one. “We’re increasingly operating in siloed realities,” said Renee DiResta, director of research at New Knowledge and head of policy at Data for Democracy. “Inadvertent algorithmic amplification on social media elevates certain perspectives—what does that do to us as a society?”

Algorithms have also already been proven to perpetuate the biases of the people who create them—and those people are often wealthy, white, and male. Ensuring that technology doesn’t propagate unfair bias will be crucial to its ability to serve a diverse population, and to keep societies from becoming further polarized and inequitable. The polarization of experience that results from pronounced inequalities within countries, Vaitheeswaran pointed out, can end up undermining democracy.

We’ll also need to walk the line between privacy and utility very carefully. As Dan Wagner, founder of Civis Analytics put it, “We want to ensure privacy as much as possible, but open access to information helps us achieve important social good.” Medicine in the US has been hampered by privacy laws; if, for example, we had more data about biomarkers around cancer, we could provide more accurate predictions and ultimately better healthcare.

But going the Chinese way—a total lack of privacy—is likely not the answer, either. “We have to be very careful about the way we bake rights and freedom into our technology,” said Alex Gladstein, chief strategy officer at Human Rights Foundation.

Technology’s risks are clearly as fraught as its potential is promising. As Gary Shapiro, chief executive of the Consumer Technology Association, put it, “Everything we’ve talked about today is simply a tool, and can be used for good or bad.”

The decisions we’re making now, at every level—from the engineers writing algorithms, to the legislators writing laws, to the teenagers writing clever Instagram captions—will determine where on the spectrum we end up.

Image Credit: Rudy Balasko / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

March 14, 2019 at 11:01PM

How celebrities have fuelled the amazing rise in pseudoscience

https://meson.in/2HdTXFZ

Image caption: Cool amusement: will cryotherapy and other treatments help Timothy Caulfield live forever? Probably not (Peacock Alley Entertainment)

By Wendy Glauser

FOR the past decade, Timothy Caulfield, a professor of health law in Alberta, Canada, has been waging war on pseudoscience. He has written books on vaccination myths and about our uncritical relationship to medicine, most famously in Is Gwyneth Paltrow Wrong About Everything?

He is big on Twitter, and now on television, too. Each episode of his series A User’s Guide to Cheating Death delves into the ways people are trying to live longer or look younger, either through …

Bio.medical

via New Scientist – Health https://meson.in/2AA4I2U

March 10, 2019 at 05:35PM

OpenAI’s Eerily Realistic New Text Generator Writes Like a Human

https://meson.in/2NV36UA

Trying to understand how new technologies will shape our lives is an exercise in managing hype. When technologists say their new invention has the potential to change the world, you’d hardly expect them to say anything else. But when they say they’re so concerned about its potential to change the world that they won’t release their invention, you sit up and pay attention.

This was the case when OpenAI, the non-profit founded in 2015 by Y Combinator’s Sam Altman and Elon Musk (amongst others), announced its new neural network for natural language processing: the GPT-2. In a blog post, along with some striking examples of its work, OpenAI announced that this neural network would not be released to the public, citing concerns around its security.

More Data, Better Data

In its outline, GPT-2 resembles the strategy that natural language processing neural networks have often employed: trained on a huge 40GB text sample drawn from the internet, the neural network statistically associates words and patterns of words with each other. It can then attempt to predict the next word in a sequence based on previous words, generating samples of new text. So far, so familiar: people have marveled at the ability of neural networks to generate text for some years. They’ve been trained to write novels and come up with recipes for our amusement.
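In miniature, “statistically associate words and predict the next one” looks something like the bigram sketch below; GPT-2 replaces this lookup table with a large neural network trained on its 40GB corpus, but the generate-one-word-at-a-time loop is the same idea:

```python
import random
from collections import defaultdict

# A tiny stand-in for the 40GB web corpus.
corpus = ("the unicorns lived in a remote valley . the unicorns spoke "
          "perfect english . the scientists studied the valley .").split()

# Count which words follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(seed="the", length=12):
    words = [seed]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))   # sample the next word
    return " ".join(words)

print(generate())
```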

But GPT-2 appears to be a step ahead of its predecessors. It’s not entirely clear why, in part due to the refusal to release the whole model; but it appears to simply represent a scaling-up of previous OpenAI efforts, using a neural network design that has existed for a couple of years. That means more CPU hours, more fine-tuning, and a larger training dataset.

The data is scraped from the internet, but with a twist: the researchers kept the quality high by scraping from outbound links from Reddit that got more than three upvotes—so if you’re a Reddit user, you helped GPT-2 find and clean its data.

The work of previous RNNs (Recurrent Neural Networks) often felt as if the vast samples of classic literature, or death metal band names, or Shakespeare, had been put through a blender then hastily reassembled by someone who’d only glanced at the original.

This is why talking to AI chatbots can be so frustrating; they cannot retain context, because they have no innate understanding of anything they’re talking about beyond these statistical associations between words.

GPT-2 operates on similar principles: it has no real understanding of what it’s talking about, or of any word or concept as anything more than a vector in a huge vector space, vastly distant from some and intimately close to others. But, for certain purposes, this might not matter.
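That “vastly distant from some and intimately close to others” is literal geometry: each word is a point in a high-dimensional space, and similarity is measured by angle. A toy sketch with invented three-dimensional vectors (real models learn hundreds of dimensions from data):

```python
import numpy as np

# Invented 3-dimensional "embeddings", for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "pin":   np.array([0.1, 0.0, 0.9]),
}

def cosine(a, b):
    """1.0 means pointing the same way; near 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # close
print(cosine(vectors["king"], vectors["pin"]))    # distant
```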

Unicorn-Chasing

When prompted to write about unicorns that could speak English, GPT-2 (admittedly, after ten attempts) came up with a page of text like this:

“Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

“While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

“However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution or at least a change in social organization,” said the scientist.”

What’s really notable about this sample is its overarching structure: it reads almost exactly as a normal scientific article or write-up of a press release would. The model doesn’t contradict itself or lose its flow in the middle of a sentence. Its references to location are consistent, as are the particular “topics” of discussion in each paragraph. GPT-2 is not explicitly programmed to remember (or invent) Dr. Pérez’s name, for example—yet it does.

The unicorn sample is a particularly striking example, but the model’s capabilities also allowed it to produce a fairly convincing article about itself. With no real understanding of the underlying concepts or facts of the matter, the piece has the ring of tech journalism, but is entirely untrue (thankfully, otherwise I’d be out of a job already).

The OpenAI researchers note that, like all neural networks, the computational resources used to train the network and the size of its sample determine its performance. OpenAI’s blog post explains: “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time.”

Rewriting the World

However, when trained on specifically-selected datasets for narrower applications, the AI becomes more convincing. An example of the niche applications the OpenAI researchers trained the model to perform was writing Amazon reviews. This kind of convincing generation of online content was what led OpenAI to decide not to release the algorithm for general use.

This decision has been controversial, with some cynics suggesting that it’s a publicity stunt designed to get more articles written to overhype OpenAI’s progress. But there’s no need for an algorithm to be particularly intelligent to shape the world—as long as it’s capable of fooling people.

Deepfake videos, especially in these polarized times, could be disruptive enough, but the complexity of a video can make it easier to spot the “artefacts,” the fingerprints left by the algorithms that generate them.

Not so with text. If GPT-2 can generate endless, coherent, and convincing fake news or propaganda bots online, it will do more than put some Macedonian teens out of a job. Clearly, there is space for remarkable improvements: could AI write articles, novels, or poetry that some people prefer to read?

The long-term impacts of such a system on society are difficult to comprehend. It is well past time for the machine learning field to abandon its ‘move fast and break things’ approach to releasing algorithms with potentially damaging social impacts. An ethical debate about the software we release is just as important as ethical debates about new advances in biotechnology or weapons manufacture.

GPT-2 hasn’t yet eliminated some of the perennial bugbears associated with RNNs. Occasionally, for example, it will repeat words, unnaturally switch topics, or say things that don’t make sense due to poor word modeling: “The fire is happening under water,” for example.

Unreasonable Reason

Yet one of the most exciting aspects of GPT-2 is its apparent ability to develop what you might call “emergent skills” that weren’t specifically programmed. The algorithm was never explicitly programmed to translate between languages or summarize longer articles, but it can have a decent stab at both tasks simply based on the enormity of its training dataset.

In that dataset were plenty of examples of long pieces of text, followed by “TL;DR.” If you prompt GPT-2 with the phrase “TL;DR”, it will attempt to summarize the preceding text. It was not designed for this task, and so it’s a pretty terrible summarizer, falling well short of how the best summarizing algorithms can perform.

Yet the fact that it will even attempt this task with no specific training shows just how much behavior, structure, and logic these neural networks can extract from their training datasets. In the endless quest to determine which word comes next, it appears, as a byproduct, to develop a vague notion of what it is supposed to do in this TL;DR situation. This is unexpected, and exciting.
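You can see the TL;DR trick for yourself with the small checkpoint OpenAI did release; the sketch below assumes the open-source Hugging Face transformers package rather than OpenAI’s own tooling:

```python
# A sketch assuming the Hugging Face "transformers" package and the
# publicly released small GPT-2 checkpoint; not OpenAI's own tooling.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "Researchers found that children explore more than adults. "
    "In experiments, four-year-olds tried unusual strategies and "
    "discovered rules that grown-ups, wary of losing points, missed."
)

# Appending "TL;DR:" nudges the model to continue with a summary,
# because that pattern appears throughout its web training data.
out = generator(article + "\nTL;DR:", max_length=100, num_return_sequences=1)
print(out[0]["generated_text"])
```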

You can download and play with a toy version of GPT-2 from Github.

Image Credit: Photo Kozyr / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

March 8, 2019 at 12:13AM

How Three People With HIV Became Virus-Free Without Drugs

https://meson.in/2NUU89Y

You’re not entirely human.

Our DNA contains roughly 100,000 pieces of viral DNA, totaling 8 percent of our entire genome. Most are ancient relics of long-forgotten invasions; but to HIV patients, the viral attacks are very real and present in every moment of their lives.

HIV is the virus that causes AIDS—the horrifying disease that cruelly eats away at the immune system. As a “retrovirus,” the virus inserts its own genetic material into a cell’s DNA, and hijacks the cell’s own protein-making machinery to spew out copies of itself. It’s the ultimate parasite.

An HIV diagnosis in the 80s was a death sentence; nowadays, thanks to combination therapy—undoubtedly one of medicine’s brightest triumphs—the virus can be kept at bay. That is, until it mutates, evades the drugs, propagates, and strikes again. That’s why doctors never say an HIV patient is “cured,” even if the viral load is undetectable in the blood.

Except for one. Dubbed the “Berlin Patient,” Timothy Ray Brown, an HIV-positive cancer patient, received a total blood stem cell transplant to treat his aggressive blood cancer back in 2008. He came out of the surgery not just free of cancer—but also free of HIV.

Now, two new cases suggest Brown isn’t a medical unicorn. One study, published Tuesday in Nature, followed an HIV-positive patient with Hodgkin’s lymphoma, a white blood cell cancer, for over two years after a bone marrow transplant. The “London patient” remained virus-free for 18 months after quitting his anti-HIV drugs, making him the second person ever to beat back the virus without drugs.

The other, presented at the Conference on Retroviruses and Opportunistic Infections in Washington, also received a stem cell transplant to treat his leukemia while controlling his HIV load using drugs. He stopped anti-virals in November 2018—and doctors only found traces of the virus’s genetic material, even when using a myriad of ultra-sensitive techniques.

Does this mean a cure for HIV is in sight? Here’s what you need to know.

Is There a Cure on the Horizon?

Sadly, no. Stem cell transplant, often in the form of a bone marrow transplant, is swapping one evil out with another. The dangerous surgery requires extensive immunosuppression afterwards and is far too intensive as an everyday treatment, especially because most HIV cases can be managed with antiviral therapy.

Why Did Stem Cell Transplants Treat HIV, Anyways?

The common denominator among the three is that they all received blood stem cell transplants for blood cancer. Warding off HIV was almost a lucky side-effect.

I say “almost” because the type of stem cells the patients received were different from their own. If you picture an HIV virus as an Amazon delivery box, the box needs to dock with the recipient (the cell’s outer surface) before the virus injects its DNA cargo. The docking process involves a bunch of molecules, but CCR5 is a critical one. For roughly 50 percent of all HIV strains, CCR5 is absolutely necessary for the virus to get into a type of immune cell called the T cell and kick off its reproduction.

No CCR5, no HIV swarm, no AIDS.

If CCR5 sounds familiar, that may be because it was the target in the CRISPR baby scandal, in which a rogue Chinese scientist edited the receptor in an ill-fated attempt to make a pair of twins immune to HIV (he botched it).

As it happens, roughly 10 percent of northern Europeans carry a mutation in CCR5 that makes them naturally resistant to HIV. The mutant, CCR5 Δ32, lacks a key component, and without it HIV can no longer dock.

Here’s the key: to treat their cancer, all three seemingly “cured” patients received stem cells from matching donors who naturally carried the CCR5 Δ32 mutation. Once settled into their new hosts, the blood stem cells activated and essentially repopulated the entire blood system—immune cells included—with the HIV-resistant super-cells. Hence, bye bye virus.

But Are Mutant Stem Cells Really the Cure?

Here’s where the story gets complicated.

In theory—and it is a good one—lack of full-on CCR5 is why the patients were able to beat back HIV even after withdrawing their anti-viral meds.

But other factors could be at play. Back in the late 2000s, Brown underwent extensive full-body radiation to eradicate his cancerous cells, and received two bone marrow transplants. To ward off his body rejecting the cells, he took extremely harsh immunosuppressants that are no longer on the market because of their toxicity. The turmoil nearly killed him.

Because Brown’s immune system was almost completely destroyed and rebuilt, it led scientists to wonder if near-death was necessary to reboot the body and make it free of HIV.

Happily, the two new cases suggest it’s not. Although the two patients did receive chemotherapy for their cancer, the drugs specifically targeted their blood cells to clear them out and “make way” for the new transplant population.

Yet in the years between Brown and the London patient, others tried to replicate the process. All failed: the virus came back once anti-viral drugs were withdrawn.

Scientists aren’t completely sure why they failed. One theory is that the source of blood stem cells matters, in the sense that grafted cells need to induce an immune response called graft-versus-host.

As the name implies, here the new cells viciously attack the host—something that doctors usually try to avoid. But in this case, the immune attack may be responsible for wiping out the last HIV-infected T cells, the “HIV reservoir,” allowing the host’s immune system to repopulate with a clean slate.

Complicating things even more, a small trial transplanting cells with normal CCR5 into HIV-positive blood cancer patients also found that the body was able to fight back the HIV onslaught—up to 88 months in one patient. Because immunosuppressants both limit the graft-versus-host/HIV attack and prevent HIV from infecting new cells, the authors suggest that the timing and dosage of these drugs could be essential to success.

One more ingredient further complicates the biological soup: only about half of HIV strains use CCR5 to enter cells. Other types, such as X4, rely on other proteins for entry. With CCR5 gone, these alternate strains could take over the body, perhaps more viciously without competition from their brethren.

So the New Patients Don’t Matter?

Yes, they do. The London patient is the first since Brown to live without detectable HIV load for over a year. This suggests that Brown isn’t a fluke—CCR5 is absolutely a good treatment target for further investigation.

That’s not to say the two patients are cured. Because HIV is currently only manageable, scientists don’t yet have a good definition of “cured.” Brown, now 12 years free of HIV, is by consensus the only one that fits the bill. The two new cases, though promising, are still considered in long-term remission.

As of now there are no accepted standards on how long a patient needs to be HIV-free before he is considered cured. What’s more, there are multiple ways to detect HIV load in the body—the Düsseldorf patient, for example, showed low signals of the virus using ultrasensitive tests. Whether the detected bits are enough to launch another HIV assault is anyone’s guess.

But the two new proof-of-concepts jolt the HIV-research sphere into a new era of hope with a promise: the disease, affecting 37 million people worldwide, can be cured.

What Next?

More cases may be soon to come.

The two cases were part of the IciStem program, a European collaboration that guides investigations into using stem cell transplantation as a cure for HIV. As of now, they have over 22,000 donors with the beneficial CCR5 Δ32 mutation, with 39 HIV-positive patients who have received transplants. More cases will build stronger evidence that the approach works.

Stem cell transplants, however, are obviously not practical as an everyday treatment option. Biotech companies are therefore already actively pursuing CCR5-based leads in a two-pronged approach: one, attack the HIV reservoir of cells; two, supply the body with brand new replacements.

Translation? Use any method available to get rid of CCR5 in immune cells.

Sangamo, based in California, is perhaps the most prominent player. In one trial, it edited CCR5 out of extracted blood cells before infusing them back into the body—a sort of CAR-T for HIV. The edited cells weren’t numerous enough to beat back HIV, but they did clear out a large pool of the virus before it bounced back. With the advent of CRISPR making the necessary edits more efficient, more trials are already in the works.

Other efforts, expertly summarized by the New York Times, include making stem cells resistant to HIV—acting as a lifelong well of immune cells resistant to the virus—or using antibodies against CCR5.

Whatever the treatment, any therapy that targets CCR5 also has to consider this: deletion of the gene in the brain has cognitive effects, in that it enhances cognition (in mice) and improves brain recovery after stroke. For side effects, these are pretty awesome. But they also highlight just how little we still know about how the gene works outside the immune system.

Final Takeaway?

Despite all the complexities, these two promising cases add hope to an oft-beaten research community. Dr. Annemarie Wensing at the University Medical Center Utrecht summarized it well: “This will inspire people that a cure is not a dream. It’s reachable.”

Image Credit: Kateryna Kon / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

March 10, 2019 at 11:01PM

Air pollution: Cars should be banned near schools says public health chief

https://meson.in/2O07j9G

Image: exhaust pipe (copyright Getty Images)

Public health chiefs have called for cars to be banned around schools in the UK, reports say.

Paul Cosford, the medical director of Public Health England, told the Times it should be socially unacceptable to leave a car running near school gates.

The comments came as PHE published a series of recommendations on how the government can improve air quality.

PHE said 28,000 to 36,000 deaths a year in the UK could be attributed to long-term exposure to air pollution.

It is also calling for congestion charges to be imposed in cities across the UK.

It describes air pollution as the biggest environmental threat to health in the UK and says there is strong evidence that air pollution causes the development of coronary heart disease, stroke, respiratory disease and lung cancer, and exacerbates asthma.

In its review, it recommends:

  • Redesigning cities so people aren’t so close to highly polluting roads by, for example, designing wider streets or using hedges to screen against pollutants
  • Investing more in clean public transport as well as foot and cycle paths
  • Encouraging uptake of low emission vehicles by setting more ambitious targets for installing electric car charging points
  • Discouraging highly polluting vehicles from entering populated areas with incentives such as low emission or clean air zones


Media caption: UK scientists estimate air pollution cuts British people’s lives by an average of six months

Prof Cosford said: “Transport and urban planners will need to work together with others involved in air pollution to ensure that new initiatives have a positive impact.

“Decision makers should carefully design policies to make sure that the poorest in society are protected against the financial implications of new schemes.”

PHE said that national government policy could support these local actions – for example, by allowing controls on industrial emissions in populated areas to take account of health impacts.

Science.general

via BBC News – Science & Environment https://meson.in/2Pv3gCp

March 11, 2019 at 03:06PM