Tag Archives: Feedly

The world’s scariest economist?


https://meson.in/2ZmY605

Mariana Mazzucato is one of the world’s most influential economists, according to Quartz magazine.  She has won many awards for her work.  She is an adviser to the UK Labour Party on economic policy; she “has the ear” of radical Congress representative Alexandria Ocasio-Cortez; and she advises Democratic presidential hopeful Senator Elizabeth Warren as well as Scottish Nationalist leader Nicola Sturgeon.  And she has written two key books: The Entrepreneurial State (2013) and The Value of Everything (2018).

Mazzucato is considered radical, even ‘scary,’ by many mainstream economists and conservative politicians.  This is because she has highlighted the important role that the state and governments have played in delivering innovation in technology and in advancing productive investment.  The idea that the state can be a leading force in innovation and investment in useful activity is anathema to the right-wing neo-liberal ‘free market’ views of the majority of mainstream economists and politicians.

In earlier posts, I highlighted her important insights into how government investment and direction were essential to the development of the new technologies of the internet, the worldwide web, Microsoft, Apple, the iPhone etc.  The iPhone, for example, was developed using public funds and military procurement projects for microprocessors.  The innovators were publicly funded universities and research institutes, not clever entrepreneur capitalists.  Indeed, there is nothing new about government or state funding for the most important innovations in capitalist accumulation.  The technological advances made during the first and second world wars using government ‘defence’ funds were huge: jet aircraft, radar, telecoms, vehicle construction etc.

So it is no accident that the sharp fall in government investment to GDP in most advanced capitalist economies in the so-called neoliberal period since the early 1980s has been accompanied by slowing productivity growth.

Capitalist sector investment since the 1980s has failed to deliver faster growth in productivity per person than in the earlier period of higher government investment.  Falling profitability in the 1970s in all the major economies led to cuts in state sector investment in technology and ‘human capital’ in order to reduce taxes on capital and keep wages down.  Indeed, privatisation was the order of the day. That helped profitability in the capitalist sector a little (along with successive slumps), but at the expense of productivity growth.

As Mazzucato makes clear in her second book, The Value of Everything, government investment and production does create value, i.e. things or services we need, and is not just a (necessary) cost. But as I commented in a review of that book, in Marxist terms Mazzucato conflates value with use-value.  Yes, government investment in schools, hospitals, transport, infrastructure and technology creates useful things, but under the capitalist mode of production for profit, it does not create value (surplus value or profit).  On the contrary, it can lower overall profitability for the capitalist sector. So there is an inherent contradiction in capitalism between use-value and value.

Unfortunately, this is not recognised in Mazzucato’s work.  As a result, she sees her task as an economist as showing how governments can make capitalism work by creating more ‘value’.   For Mazzucato, governments can do more “than play a passive role in fixing market failures” (I doubt that they can even do that – MR) and should instead “be allowed to embrace entrepreneurial spirit to steer the direction of innovation and economic growth”.  She wants governments to have missions “to get shit done”.  Now this sounds scary to the mainstream, but they need not worry.  Mazzucato does not advocate replacing capitalism with socialism – as she says, “I don’t think these words are helpful… there are all sorts of different ways to do capitalism… that’s what I think needs completely rebooting rather than to start calling things socialism”. Here she echoes the approach of Elizabeth Warren.

Capitalism, socialism; what’s in a name?  Well, behind a name lies a categorisation of the structure of a mode of production and social relations.  Mazzucato wants capitalism to deliver more and better things and services for people, but without touching the private ownership of the means of production.  And talking about replacing capitalist companies with common ownership, planning and workers democracy would be a mistake. “If you start talking about socialism, it’s not going to make companies do anything different from what they’re doing now.”  But will suggesting that big business invest productively without taking into account “shareholder value” work either?

For Mazzucato, socialism is a nice idea but not practical.  “Regardless of what I would like to see in an ideal world, I think realistically we’ll have capitalism”. The problem with that conclusion is that being ‘realistic’, accepting that capitalism will be here for the foreseeable future and so trying to make it work better, is precisely what is not realistic!  Under capitalism, can the regular and recurring economic slumps that cost millions their jobs, homes and livelihoods in every generation be avoided?  Can imperialist adventures and exploitation be avoided?  Can extreme inequality of wealth and income be reversed?  Can climate change and global warming be stopped?

Can any of these horrors realistically be removed by getting governments and multi-nationals to have ‘missions’ to ‘get shit done’ while still preserving the capitalist system of production and investment for private profit? That is what is unrealistic.  But it is safer to talk about saving capitalism from itself or making it work better with the help of government than replacing capitalism.  The latter would really be scary for the existing order.


Econo.Progressive

via Michael Roberts Blog https://meson.in/2ErIaA0

July 26, 2019 at 08:06PM

The brain’s drain: how our brains flush out their waste and toxins


https://meson.in/2OntlH2

Meningeal lymphatic vessels remove toxins from the brain


Ji Hoon Ahn and Hyunsoo Cho/Institute for Basic Science Center for Vascular Research, Daejeon, South Korea

By Chelsea Whyte

How does the brain clean itself? We now know a major route for clearing toxins out from the brain, and the finding could help us understand what goes wrong in age-related conditions such as Alzheimer’s disease.

Where cerebrospinal fluid enters and exits the brain has been a long-standing enigma, says Gou Young Koh at the Korea Advanced Institute of Science and Technology in South Korea. In 2014, a network of vessels called the meningeal lymphatic vessels, in the outer brain membrane, was found to play a part in regulating the brain’s fluids, flushing out excess proteins that can build up in the brain.

However, because of the brain’s complex structure, it remained unclear where the majority of this drainage occurs.


Age effect

To find out which routes the fluid takes, Koh and his colleagues injected dye and tracer quantum dots into the cerebrospinal fluid of mice and then traced where it flowed out of the brain using brain scans. They found that the basal meningeal lymphatic vessels allow cerebrospinal fluid to move in and out of the brain at the base of the skull, but not at the top.

The team found a significant decline in cerebrospinal fluid drainage in the brains of older mice. Animals that were two or more years old had about half the level of drainage through their basal meningeal lymphatic vessels as animals aged three months.

“It’s amazing that this is such a basic anatomical question and we don’t know how something as important as fluid around the brain is cleared out,” says Steven Proulx at the University of Bern in Switzerland. “This is not the end of the discussion, though. Our own findings are that drainage pathways in the nasal region and even the optic region are as important or even more important than this one.”

Toxic proteins

The draining of cerebrospinal fluid is thought to be important for brain health. In conditions like Alzheimer’s disease, proteins such as amyloids can build up in the brain and may cause damage.

Proulx says there may be several routes for the brain to drain fluids, including the spine. “Knowing where this flow happens is important for understanding immune reactions that can occur through the central nervous system and how toxic proteins like amyloids could be removed,” says Proulx.

He suggests that it might be possible to use growth factors to boost drainage from the brain to treat neurodegenerative disorders.

Some research has previously suggested that toxins are mostly flushed out from the brain during sleep, but other studies have found this evidence to be inconclusive.

Journal reference: Nature, DOI: 10.1038/s41586-019-1419-5


Bio.medical

via New Scientist – Health https://meson.in/2AA4I2U

July 25, 2019 at 07:09AM

Genetic screen identifies genes that protect cells from Zika virus


https://meson.in/2ygrSr6

A new study uses a genetic screen to identify genes that protect cells from Zika viral infection. The research may one day lead to the development of a treatment for Zika and other infections.

Bio.technology

via ScienceDaily: Biotechnology News https://meson.in/2CjfWYX

July 26, 2019 at 02:20AM

Editing RNA Expands CRISPR’s Use Far Beyond Genetic Diseases


https://meson.in/2GxD0o2

CRISPR advances have been coming so frequently that it’s hard to keep track.

In just a few years, it’s evolved from a nifty genome word editor to a full-on biological Swiss army knife. There’s the classic shutdown-that-faulty-gene version. There’s the change-and-replace-single-DNA-letters version. There are even spinoffs that let you add a gene, edit a bunch of genes, or irreversibly alter the genetic information of an entire species.

But before your eyes glaze over: this new family of upgrades is fundamentally different.

Rather than targeting DNA, a team at MIT repurposed CRISPR to edit single letters in RNA, the messengers that carry DNA information to the protein-building parts of the cell. Without RNA, most of DNA’s coding is moot: it’s similar to writing pages of software code, only unable to compile it into an executable program.

The effort is led by the legendary Dr. Feng Zhang, who was one of the first to realize CRISPR’s powerful editing abilities in mammalian cells. The tool, RESCUE, builds on Zhang’s previous attempt at using CRISPR to precisely swap one RNA letter for another—already hailed as a “tour de force” by outside experts.

This time, however, the editing is multiplexed. RESCUE can swap two letter pairs at the same time, doubling the number of disease-causing mutations translated by RNA that can be neutralized. Even more valuably, the tool can fundamentally change the way molecules in our cells communicate information, amping up—or temporarily blocking—the delicate amorphous “phone lines” that tell cancer cells to grow, or neurons to wither away from Alzheimer’s and other diseases.

“To treat the diversity of genetic changes that cause disease, we need an array of precise technologies to choose from…we were able to fill a critical gap in the toolbox,” said Zhang.

RNA: Biomedicine’s Frontier

Zhang’s results are neat—but they’re not the takeaway. To understand why they matter, it helps to gain a broader perspective on why scientists are eager to target RNA in the first place.

Think of RNA as a CliffsNotes version of DNA. When a gene needs its message heard, it recruits a group of middlemen to rapidly build short RNA strands from scratch, which faithfully contain all the “coding” information in a gene needed to make a protein. Like DNA, RNA has four letters—A, G, C, and U, which acts like DNA’s T—that bind with DNA in specific pair-wise ways. Three-letter combinations of RNA letters form a dictionary that mostly encodes amino acids, the building blocks of proteins; occasionally a combination means “stop.” In all, a total of 64 combinations of RNA triplets encode 20 different amino acids, forming a second version of life’s base code.
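The codon arithmetic above can be sketched in a few lines of Python. The amino-acid assignments below are a tiny illustrative subset of the real genetic code, chosen only to show the redundancy and the “stop” signal:

```python
from itertools import product

# The four RNA letters; U plays the role of DNA's T.
BASES = "AGCU"

# All three-letter combinations ("codons"): 4^3 = 64 in total.
codons = ["".join(c) for c in product(BASES, repeat=3)]

# A tiny illustrative subset of the real genetic code: several codons can
# map to the same amino acid (redundancy), and a few mean "stop".
sample_code = {
    "UUU": "Phe", "UUC": "Phe",  # two codons, one amino acid
    "AUG": "Met",                # also the usual "start" signal
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna):
    """Read an RNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = sample_code.get(rna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(len(codons))             # 64
print(translate("AUGUUUUAA"))  # ['Met', 'Phe']
```

The 64-to-20 mapping is why several codons share an amino acid, and it is exactly this dictionary that an RNA letter-swap can rewrite.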

RNA skyrocketed into prominence as a way to control gene expression almost two decades ago, and last year, the FDA finally approved the first RNA-targeting gene silencing drug. Despite the exploding popularity of DNA-focused CRISPR, however, targeting RNA never lost steam, for three main reasons.

One, it’s the no-commitment gene therapy alternative. Because RNA rapidly regenerates, any mistakes in editing will wash out within hours, allowing scientists to quickly scan for another alternative.

Two, it achieves the same outcome as genome editing without adding risk. Without editing the genome—which, as we’ve seen in CRISPR babies, can go very wrong—there’s no risk of triggering permanent cancerous mutations or other lifelong side effects.

Three, targeting RNA can alter hotspots on a protein that are essential to its function. Not to overwhelm you with too much biochemistry, but proteins often “talk” to each other by adding—or deleting—certain chemical groups. It’s like either putting up or removing a “do not disturb” sign on your hotel room door—the cell’s staff will know whether or not to continue with their tasks. This is huge.

Life runs on these signs: should a brain cell die after a stroke? Should neurons build up protein clumps that further trigger neurodegeneration? Should that cancer cell keep dividing? These chemical signs are a goldmine for treating all sorts of diseases. RNA-editing CRISPR is a simple, robust, and effective way to open them up for intervention.

REPAIR and RESCUE

Back in late 2017, Zhang’s team described the first CRISPR alternative that snips RNA into bits, simultaneously destroying any carried genetic information in the molecules. Just a month later, they presented the first RNA base-editing tool: REPAIR, a CRISPR variant that precisely changes the letter “A” to an artificial form of “G.”

In humans, a G-to-A mutation is extremely common, implicated in health conditions such as epilepsy, Parkinson’s disease, and Duchenne muscular dystrophy. The new tool rejigs those mutations into benign forms while leaving the letters surrounding the troubled area alone.

To build REPAIR, the team strung two parts together like a molecular buddy-cop tag team. One is Cas13, a CRISPR-family “scissor” protein that likes to cut RNA instead of DNA. The team made a neutered version, stripped of its cutting ability but retaining its capability to grasp onto specific RNA sequences. They then chemically linked the Cas13 mutant with ADAR2, a protein word processor that converts A to I (which the cell reads as G). Together, the deactivated Cas13 hunts down a target sequence in RNA, and ADAR2 swaps the letters.

The new system, RESCUE, uses REPAIR as its template. By analyzing the structure of ADAR2, the team made a few educated guesses to gradually change its activity, so that the protein learns to turn “C” to “U” in an RNA molecule. Using a process called directed evolution, they screened 16 rounds of RESCUE constructs in yeast cells until they found one with up to 80 percent editing efficiency. A quick test with 24 clinically relevant mutant synthetic RNAs found editing proficiencies of around 40 percent (though some were as low as single digits). Further optimization reduced off-target hits to around 100 without disrupting on-target abilities.
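The directed-evolution loop behind that screen (generate variants, measure, keep the best, mutate, repeat) can be sketched as a toy simulation. Everything here is invented for illustration: the “construct” is just a parameter vector, and the fitness function stands in for the editing-efficiency assay run in yeast:

```python
import random

random.seed(1)

# Toy stand-ins: a "construct" is a vector of three parameters, and
# fitness is closeness to a hidden optimum, standing in for measured
# editing efficiency.
OPTIMUM = [0.8, 0.2, 0.5]

def fitness(variant):
    return -sum((v - o) ** 2 for v, o in zip(variant, OPTIMUM))

def mutate(variant, rate=0.05):
    """Random small changes, standing in for mutagenesis."""
    return [v + random.gauss(0, rate) for v in variant]

population = [[random.random() for _ in range(3)] for _ in range(20)]
start_best = max(map(fitness, population))

# Directed evolution: screen, keep the top variants, mutate, repeat.
for _ in range(16):  # 16 rounds, as in the RESCUE screen
    survivors = sorted(population, key=fitness, reverse=True)[:5]
    population = [mutate(s) for s in survivors for _ in range(4)]

final_best = max(map(fitness, population))
```

Each round the best performers seed the next generation, so fitness ratchets upward; the real screen applies the same logic with proteins and cells rather than vectors.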

In cultured human cells, the team found they could efficiently alter the cell’s “do not disturb” molecular signs using RESCUE. In one case the team was able to boost cell growth, similar to the process seen in prostate and other cancers. Another interesting target is APOE4, the team explained, which increases the risk of Alzheimer’s. RESCUE could, in theory, alter APOE4 RNA transcripts so that they resemble the more brain-protective APOE2 version, thus potentially helping at-risk individuals without altering their brain’s genetic profile.

An RNA Future

RESCUE combines the top traits of RNA editing with the strength and resources of CRISPR, bridging two of biomedicine’s most promising approaches into a single tool. Compared to DNA-targeting CRISPR, it could finally put “undruggable” molecular targets within reach.

To be fair, RESCUE’s efficacy and specificity need years more tinkering to be acceptable for clinical use. And because RNA regenerates, the editing effects are temporary, which could become problematic if counteracting a lifelong genetic disease. But to some, that’s a feature, not a bug—it makes the tool useful for temporary conditions such as inflammation, stroke, or infectious diseases that only need brief treatments.

“Applications of the CRISPR system to RNA are heating up,” said Dr. Gene Yeo at UCSD, who founded a startup that uses CRISPR to target and cleave RNA for incurable conditions such as Huntington’s disease. His previous efforts engineered Cas9 variants that left DNA alone while destroying toxic RNA buildup to block the progression of neurodegeneration.

“RNA targeting has many advantages, and I think this will grow much more because there are many more things you can do to RNA than DNA,” said Yeo.

Image Credit: petarg / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

July 24, 2019 at 11:01PM

Machine Learning vs. Climate Change: AI for the Greener Good


https://meson.in/2Zch5u0

Climate change is one of the most pressing issues of our time. Despite increasing global consensus about the urgency of reducing emissions since the 1980s, they continue to rise relentlessly. We look to technology to deliver us from climate change, preferably without sacrificing economic growth.

Our optimistic—some would say techno-utopian—visions of the future involve vast arrays of solar panels, machines that suck carbon dioxide back out of the atmosphere, and replacing fossil fuels for transport and heating with electricity generated by renewable means. This is nothing less than rebuilding our civilization on stable, sustainable foundations.

Meanwhile, society is increasingly being shaped by machine learning algorithms: automating occupations, performing tasks from diagnosing illnesses to serving up adverts, and nudging people into different behaviors. So how can AI help in the fight against climate change?

“In many ways” is the answer. Just as tackling climate change involves practically every sector—agriculture, transport, architecture, energy, industry, logistics to name but a few—so machine learning solutions can find their niche to help solve some of the thousands of problems that arise. This can range from improving our understanding of the problem by making better climate models, helping businesses and industries reduce their emissions, aiding in the design of new technologies, or helping society adapt to the changes that are already on the way.

Now, a team of researchers from multiple institutions—including Coursera co-founder Andrew Ng, Chief Scientist of Google John Platt, and Turing Award winner Yoshua Bengio—have published a 100-page research paper outlining some of the areas where machine learning is best placed to make a difference.

Balancing the Grid

A classic example is in the field of renewable energy. Solar and wind are now, in most regions, the cheapest electricity generation to build, even without a price on carbon. The main barrier is intermittency: how to integrate these power sources, which vary with the weather and seasonally, onto a grid driven by human demands. Doing this efficiently allows us to minimize the amount of fossil fuels we burn, but it requires skill in forecasting both supply and demand.

Machine learning algorithms can process huge amounts of data, from real-time weather conditions to information about pollution to video streams from areas near solar panels, and can rapidly convert these into predictions for the amount of power that will be generated. Beyond just forecasts, though, machine learning algorithms can be in charge of “scheduling and dispatch”—determining which power plants should operate at any given time, and which can be switched off.

In the future, Internet of Things technologies may provide more flexibility for demand-side management: the most power-intensive processes can take place when supply peaks, avoiding wasted energy and overproduction. Electrification of transportation will also add local storage options to this more complex grid: the large batteries of electric cars could be used to power your home, and the first models that can do this are forthcoming.

Networks for Networks, Materials

Controlling such a network of supply, storage, and demand in the presence of uncertainty and streams of data from millions of different sources is a job for machine learning. Algorithms such as those that serve up ads already use mathematical infrastructure like bandit theory to decide which action is likely to maximize a given reward; they could be well adapted to control this new, greener grid if that reward is minimizing emissions, or maximizing profit for the electricity company.
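A minimal sketch of that bandit idea, with entirely invented numbers: an epsilon-greedy agent repeatedly chooses among hypothetical dispatch options, receives a noisy reward framed as negative emissions, and learns which option to exploit:

```python
import random

random.seed(0)

# Invented dispatch options with an average reward the agent does not
# know in advance, framed as negative emissions per unit of demand met.
TRUE_REWARD = {"solar": -0.5, "wind": -1.0, "gas": -9.0}

def observe(source):
    """A noisy reward signal, e.g. metered emissions for one time slot."""
    return TRUE_REWARD[source] + random.gauss(0, 0.5)

# Epsilon-greedy bandit: usually exploit the best-known option,
# occasionally explore the others.
estimates = {s: 0.0 for s in TRUE_REWARD}
counts = {s: 0 for s in TRUE_REWARD}
epsilon = 0.1

for _ in range(2000):
    if random.random() < epsilon:
        choice = random.choice(list(TRUE_REWARD))   # explore
    else:
        choice = max(estimates, key=estimates.get)  # exploit
    reward = observe(choice)
    counts[choice] += 1
    # Incremental update of the running mean reward estimate.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

best = max(estimates, key=estimates.get)
```

After a few thousand slots the agent settles on the lowest-emission option. In a real grid the “arms” would be full dispatch schedules and the reward would come from metered supply, demand, and emissions data, but the learning loop is the same.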

Another network that might benefit from machine learning control is transportation. Cutting down on unnecessary journeys or alleviating traffic can help to reduce pollution. Uber’s algorithms already excel at matching riders to drivers, and ride sharing is another means of reducing emissions from transport. As autonomous vehicles become increasingly prevalent, machine learning algorithms can optimize with emissions in mind, helping to cut output from a sector that accounts for about a quarter of carbon dioxide emissions.

On the research and development side, machine learning is increasingly combined with physics-based models and experimental data to predict how new materials will behave. This can help us to find materials for flexible, super-efficient solar panels or LEDs by predicting which crystal structures will have the best photovoltaic properties; it can be used to design thermoelectric materials that can turn waste heat back into useful electricity; and it can be used to help find absorbent materials for those negative-emissions CO2 scrubbers. One could even imagine that, someday, the entire process of choosing, designing, fabricating, and testing a new crystal could be automated end to end under machine learning control.

Satellites and Patrolling Paris

The Paris Agreement is much-vaunted as the main international agreement to reduce emissions. However, it is based on voluntary targets and self-reporting of emissions. Not only are there as many ways of carbon accounting as there are accountants, but there is also the potential for fraud and deception: after all, Volkswagen systematically cheated on emissions tests for years. More trust might arise if emissions could be monitored remotely.

Satellite data, including a new fleet of CO2-monitoring satellites due to be launched by the EU in the 2020s, could allow for independent measurements of CO2 to take place, helping nations take stock of their individual and collective efforts and identify key areas to work on. Churning through satellite data, particularly where it requires feature recognition, is a job that machine learning algorithms already excel at. The drive for natural gas production through fracking and other techniques has led to leaks from methane pipes, driving up concentrations of a potent greenhouse gas. But these can also be spotted with satellites.

This is not all satellite data can be used for. A large part of our uncertainty in how the climate has responded to human influence is due to clouds, which can be influenced by pollution in complex ways. ML algorithms that scan through satellite cloud data, correlating it with sources of pollution on the ground, can help us narrow down this uncertainty and hence better constrain forecasts of global temperature.

Modeling and Adaptation

Neural networks are very good at encoding subtle, statistical relationships between multiple variables. This means they can potentially be used to represent physical processes in a more computationally efficient way, allowing us to improve climate and weather models by integrating more real-world data and better representations of processes that take place on small scales. This is crucial, as we rely on climate models to understand which impacts are likely to affect which regions in the future, and even to determine whether geoengineering schemes might do more harm than good. Improving these models means better decision-making.

Meanwhile, those most vulnerable to climate change live in the poorest nations, where governments are least able to adapt and extreme heatwaves, droughts, or floods are deadly. Machine learning can be used to map informal settlements from satellite data: the first step in disaster response is knowing where people actually live. When crisis hits, machine learning algorithms can trawl through aerial photography, satellite data, and even social media posts in real time, providing information to rescuers about where help is most needed. Automated monitoring of social media combined with natural language processing can tell rescuers where supplies of water and food are low, even when conventional means of communication are unreliable.

There are aims to use machine learning to help in the social side of climate change as well. Tools that allow you to optimize your own energy use, or keep track of your carbon footprint, can be improved by machine learning algorithms. Yoshua Bengio’s project aims to galvanize people into action by visualizing possible future impacts of climate change with neural networks that generate imagery of flooded homes.

Many Tools for Many Tasks

Machine learning can even be used to try to reduce the carbon footprint of… machine learning. The energy consumption from GPUs can be huge, particularly when you’re running them to do work that is useless or redundant by design. Training advanced neural networks comes with a carbon footprint of its own. But, of course, saving energy saves money as well as benefiting the environment: this is why Google seeks to use machine learning to reduce the energy footprint of its datacenters by changing operation strategy and cooling techniques.

In short, the possibilities for machine learning to help with climate change are all around us. The machine learning revolution is based on the idea that the more data we collect and process, the more statistical relationships we understand, the better decisions we can make. Climate science is heavily driven by climate data: adaptation will require policies that are tailored to the individual changes expected in each region; mitigation will require improvements in efficiency and changes in energy use in virtually every sector of society. The time is ripe to deploy some of our most advanced and exciting computational tools to help solve the outstanding challenge of our age.

Image Credit: Man As Thep / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

July 21, 2019 at 11:01PM

Making Algorithms More Like Kids: What Can Four-Year-Olds Do That AI Can’t?


https://meson.in/2jJFgAH

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.

Alan Turing famously wrote this in his groundbreaking 1950 paper Computing Machinery and Intelligence, and laid the framework for generations of machine learning scientists to follow. Yet, despite increasingly impressive specialized applications and breathless predictions, we’re still some distance from programs that can simulate any mind, even one much less complex than a human’s.

Perhaps the key came in what Turing said next: “Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.” This seems, in hindsight, naive. Moravec’s paradox applies: things that seem like the height of human intellect, like a good stimulating game of chess, are easy for machines, while simple tasks can be extremely difficult. But if children are our template for the simplest general human-level intelligence we might program, then surely it makes sense for AI researchers to study the many millions of existing examples.

This is precisely what Professor Alison Gopnik and her team at Berkeley do. They seek to answer the question: how sophisticated are children as learners? Where are children still outperforming the best algorithms, and how do they do it?

General, Unsupervised Learning

Some of the answers were outlined in a recent talk at the International Conference on Machine Learning. The first and most obvious difference between four-year-olds and our best algorithms is that children are extremely good at generalizing from a small set of examples. ML algorithms are the opposite: they can extract structure from huge datasets that no human could ever process, but generally large amounts of training data are needed for good performance.

This training data usually has to be labeled, although unsupervised learning approaches are also making progress. In other words, there is often a strong “supervisory signal” coded into the algorithm and its dataset, consistently reinforcing the algorithm as it improves. Children can learn to perform generally on a wide variety of tasks with very little supervision, and they can generalize what they’ve learned to new situations they’ve never seen before.

Even in image recognition, where ML has made great strides, algorithms require a large set of images before they can confidently distinguish objects; children may only need one. How is this achieved?

Professor Gopnik and others argue that children have “abstract generative models” that explain how the world works. In other words, children have imagination: they can ask themselves abstract questions like “If I touch this sharp pin, what will happen?” And then, from very small datasets and experiences, they can anticipate the solution.

In doing so, they are correctly inferring the relationship between cause and effect from experience. Children know that the reason that this object will prick them unless handled with care is because it’s pointy, and not because it’s silver or because they found it in the kitchen. This may sound like common sense, but being able to make this kind of causal inference from small datasets is still hard for algorithms to do, especially across such a wide range of situations.

The Power of Imagination

Generative models are increasingly being employed by AI researchers—after all, the best way to show that you understand the structure and rules of a dataset is to produce examples that obey those rules. Such neural networks can compress hundreds of gigabytes of image data into hundreds of megabytes of statistical parameter weights and learn to produce images that look like the dataset. In this way, they “learn” something of the statistics of how the world works. But to do what children can and generalize with generative models is computationally infeasible, according to Gopnik.

This is far from the only trick children have up their sleeve which machine learning hopes to copy. Experiments from Professor Gopnik’s lab show that children have well-developed Bayesian reasoning abilities. Bayes’ theorem is all about assimilating new information into your assessment of what is likely to be true based on your prior knowledge. For example, finding an unfamiliar pair of underwear in your partner’s car might be a worrying sign—but if you know that they work in dry-cleaning and use the car to transport lost clothes, you might be less concerned.
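The Bayesian update in the dry-cleaning example can be made concrete with a few lines of Python. The probabilities below are made up purely for illustration:

```python
# Bayes' theorem, P(H|E) = P(E|H) * P(H) / P(E), applied to the
# dry-cleaning example with invented probabilities.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Update belief in hypothesis H after seeing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H: "something worrying is going on"; E: "unfamiliar underwear in the car".
prior = 0.05

# Without background knowledge, the evidence is very surprising unless
# H is true, so the posterior shoots up.
posterior_naive = posterior(prior, p_e_given_h=0.5, p_e_given_not_h=0.001)

# Knowing your partner transports lost dry-cleaning, the evidence is
# unsurprising either way, so the posterior stays low.
posterior_informed = posterior(prior, p_e_given_h=0.5, p_e_given_not_h=0.4)

print(round(posterior_naive, 3), round(posterior_informed, 3))  # 0.963 0.062
```

The same evidence yields wildly different conclusions depending on prior knowledge, which is exactly the inference children appear to perform when working out the rules behind a new toy.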

Scientists at Berkeley present children with logical puzzles, such as machines that can be activated by placing different types of blocks or complicated toys that require a certain sequence of actions to light up and make music.

When they are given several examples (such as a small dataset of demonstrations of the toy), they can often infer the rules behind how the new system works from the age of three or four. These are Bayesian problems: the children efficiently assimilate the new information to help them understand the universal rules behind the toys. When the system isn’t explained, the children’s inherent curiosity leads them to experiment with these systems—testing different combinations of actions and blocks—to quickly infer the rules behind how they work.
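The hypothesis elimination at work in these experiments can be sketched in a few lines. This toy “machine” lights up when at least one special block is placed on it; the rule and the demonstrations below are invented for illustration:

```python
from itertools import combinations

BLOCKS = ["red", "blue", "green"]

def consistent_hypotheses(demos):
    """Return every hypothesis (set of 'special' blocks) that
    explains all of the observed demonstrations."""
    hyps = []
    for r in range(1, len(BLOCKS) + 1):
        for special in combinations(BLOCKS, r):
            # The machine lights up iff a special block is present.
            if all((len(set(special) & placed) > 0) == lit
                   for placed, lit in demos):
                hyps.append(set(special))
    return hyps

# Three demonstrations: which blocks were placed, and whether it lit up.
demos = [
    ({"red", "blue"}, True),
    ({"blue"}, False),
    ({"red"}, True),
]
```

After just three demonstrations, every surviving hypothesis contains the red block: a small dataset has pinned down the causally relevant feature.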

Indeed, it’s the curiosity of children that actually allows them to outperform adults in certain circumstances. When an incentive structure is introduced—i.e. “points” that can be gained and lost depending on your actions—adults tend to become conservative and risk-averse. Children are more concerned with understanding how the system works, and hence deploy riskier strategies. Curiosity may kill the cat, but in the right situation it can let children win the game: adults, avoiding any action that might result in punishment, miss rules that the children identify.

To Explore or to Exploit?

This research shows not only the innate intelligence of children, but also touches on classic problems in algorithm design. The explore-exploit problem is well known in machine learning. Put simply, if you only have a certain amount of resources (time, computational ability, etc.), are you better off searching for new strategies, or simply taking the path that seems to most obviously lead to gains?
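A standard toy model of this trade-off is the multi-armed bandit, sketched below with an epsilon-greedy strategy. The payout probabilities and epsilon values are arbitrary choices for illustration:

```python
import random

def run_bandit(pay_probs, epsilon, steps=10_000, seed=0):
    """Play a slot machine with several arms, each paying out with an
    unknown probability. epsilon sets the explore/exploit balance."""
    rng = random.Random(seed)
    counts = [0] * len(pay_probs)
    totals = [0.0] * len(pay_probs)
    reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            # Explore: try a random arm, even if it looks worse.
            arm = rng.randrange(len(pay_probs))
        else:
            # Exploit: pick the arm with the best estimated payout
            # (untried arms count as infinitely promising).
            est = [t / c if c else float("inf")
                   for t, c in zip(totals, counts)]
            arm = est.index(max(est))
        payout = 1.0 if rng.random() < pay_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += payout
        reward += payout
    return reward / steps
```

A high epsilon plays like the curious child, a low epsilon like the conservative adult; in this toy setup, a small amount of exploration mixed with exploitation beats exploring at random.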

Children favor exploration over exploitation. This is how they learn—through play and experimentation with their surroundings, through keen observation and asking as many questions as they can. Children are social learners: as well as interacting with their environment, they learn from others. Anyone who has ever had to deal with a toddler endlessly using that favorite word, “why?”, will recognize this as a feature of how children learn! As we get older—kicking in around adolescence in Gopnik’s experiments—we switch to exploiting the strategies we’ve already learned rather than taking those risks.

These concepts are already being imitated in machine learning algorithms. One example is the idea of “temperature” for algorithms that look through possible solutions to a problem to find the best one. A high-temperature search is more likely to pick a random move that might initially take you further away from the reward. This means that the optimization is less likely to get “stuck” on a particular solution that’s hard to improve upon, but may not be the best out there—but it’s also slower to find a solution. Meanwhile, searches with lower temperature take fewer “risky” random moves and instead seek to refine what’s already been found.
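One common concrete form of this “temperature” knob is a softmax over candidate scores; this is a generic sketch, not tied to any particular system mentioned in the article:

```python
import math

def softmax(scores, temperature):
    """Turn raw scores into pick-probabilities. High temperature spreads
    probability across options (exploration); low temperature piles it
    onto the best-scoring option (exploitation)."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.0, 2.0, 3.0]
hot = softmax(scores, temperature=10.0)   # close to uniform
cold = softmax(scores, temperature=0.1)   # almost all mass on the best
```

At temperature 10 the three options are picked almost equally often; at 0.1 the search effectively always refines its current best candidate.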

In many ways, humans develop in the same way: from high-temperature toddlers who bounce around playing with new ideas and new solutions, even when they seem strange, to low-temperature adults who take fewer risks and are more methodical, but also less creative. This is how we try to program our machine learning algorithms to behave as well.

It’s nearly 70 years since Turing first suggested that we could create a general intelligence by simulating the mind of a child. The children he looked to for inspiration in 1950 are all knocking on the door of old age today. Yet, for all that machine learning and child psychology have developed over the years, there’s still a great deal that we don’t understand about how children can be such flexible, adaptive, and effective learners.

Understanding the learning process and the minds of children may help us to build better algorithms, but it could also help us to teach and nurture better and happier humans. Ultimately, isn’t that what technological progress is supposed to be about?

Image Credit: BlueBoeing / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

June 26, 2019 at 11:15PM

Cancer-Killing Living Drug Is Made Safer With a Simple Off Switch

https://meson.in/2jKzSwS

When it comes to battling cancer, our most powerful weapon is also our most dangerous.

You’ve heard of CAR-T: the cellular immunotherapy extracts a patient’s own immune cells, amps up their tumor-hunting prowess using gene therapy, and infuses the super-soldiers back into the patient to pursue and rip their targets to shreds—literally. Since late 2017, the FDA has approved CAR-T therapy for leukemia and lymphoma, deadly childhood cancers generally unmanageable with classic chemotherapy or radiation. In the realm of revolutionary treatments, CAR-T absolutely fits the bill.

But there’s a serious problem.

Unlike traditional chemicals, CAR-T cells are living drugs that continue to proliferate inside the body. That’s great for replenishing the cancer-killing troops, but it comes with a deadly caveat: the cells may go full-on berserk. Once unleashed, there are few ways to control their activity. In some cases, the good guys turn monstrous, releasing chemicals in a cascade that propels the body into immune overdrive. Left uncontrolled, the result is often fatal.

This week, a collaboration between University Hospital Würzburg in Germany and Memorial Sloan Kettering Cancer Center in New York found an easy and reliable way to slam on the CAR-T brake. Rather than acting on the CAR-T cells themselves, the antidote severs the cells’ downstream actions, leaving them in a dormant state that can be re-awakened.

The drug, called dasatinib, essentially puts CAR-T on a leash—one strong enough to stop deadly runaway immune reactions in their tracks. Currently approved for some types of leukemia, dasatinib is an old-school drug with over a decade of history and is intimately familiar to the oncology world.

“The evaluation and implementation of dasatinib as an on/off control drug in CAR-T cell immunotherapy should be feasible and straightforward,” the authors wrote. The results were published in Science Translational Medicine, and matched independent conclusions from another team.

A Dial for Killer Cells

Rather than focusing on the cells themselves, the team looked at what happens after CAR-T cells grab onto their target.

As immune cells, CAR-T soldiers already have protein “claws” embedded on their surface that recognize all sorts of invaders such as bacteria. CAR-Ts, however, are further armed with genetically engineered claws that more efficiently hunt down a particular type of tumor.

These claws are short-range weapons. The cells need to physically interact with their target by grabbing onto proteins dotted on the cancer cell’s surface with the claws. This “handshake” causes a cascade of biochemical reactions inside the CAR-T cells, which triggers them to release a hefty cloud of immune chemicals—dubbed cytokines—toxic to the tumor. The end result is rather grisly: the tumor “melts,” literally breaking apart into tiny bio-building-blocks that the body subsequently absorbs or expels.

From previous research, the team noticed that dasatinib quiets down one of the molecules involved in the chain reaction CAR-T cells set off after the handshake. So it makes sense that blocking this deadly game of telephone can halt CAR-T actions.

They first tested their idea in cultured tumor cells in petri dishes. Using a popular CAR-T recipe—one with high rates of complete remission in recent clinical trials—the team challenged the tumors with their engineered killers, either with or without dasatinib. Remarkably, treatment with the drug completely halted CAR-T’s ability to rip their targets apart. One direct dose worked for hours, and when given multiple doses, the drug could inhibit the cells’ activity for at least a week.

Encouraged, the team tried the drug on several other CAR-T recipes, which all trigger the same downstream reaction. The trick worked every time. It suggests that any CAR-T cells that use this “telephone” pathway can be controlled using dasatinib, the team concluded.

The drug was also tunable and reversible, two extremely powerful traits in pharmaceutics.

Tunable means the drug’s effect depends on dose: like turning a dial, the team can predictably control its inhibitory action by how much they add. And when CAR-Ts need to go back into full force, all the team has to do is sit and wait—literally—for the cell to metabolize the drug away. As soon as the levels drop, CAR-Ts spring back into action with no side effects.

More clinically relevant, the drug didn’t just work in isolated cells; it also worked in mice with tumors. With just two doses, the team was able to keep CAR-T therapy in check. Once they stopped the treatment, CAR-T cells sprang back, and the team again detected their chemical attacks on the tumors. Because CAR-T cells are expensive to engineer, that’s a huge perk: the cells can linger inside the body awaiting orders, without a full withdrawal of the troops.

An Antidote for Immune Overreaction

Ask any oncologist, and “cytokine storm” is on top of their list for CAR-T dangers. Because these cells proliferate inside the body, they can in some conditions—specifics still unclear—dump a bucketful of toxic immune molecules into the body. This purging action then causes native immune cells to respond in kind, releasing their own cytokines.

“It’s a runaway response,” said Travis Young at the California Institute for Biomedical Research. “There’s no way to control if that patient will have a 100-, a 1,000-, or a 10,000-fold expansion of their CAR-T cells.”

The result is a tornado-scale immune reaction that destroys indiscriminately, tumor or not. In some patients, it’s a death sentence. Because berserk CAR-T cells form the root of the immune tornado, the team tested in mice whether dasatinib can neutralize the deadly side effect. Here, they used a mouse model previously shown to induce an extreme cytokine storm. While all of the tumor-laden mice received CAR-T, some also got a shot of dasatinib three hours later.

Without the antidote, 75 percent of CAR-T infused mice died within two days. With the drug, fatality dropped to 30 percent. It’s not zero, but it does mean that some patients may be saved.

Control Is King

Because dasatinib has been around for over a decade, there’s plenty of data on how human bodies handle the drug. The team believes that popping a pill every six hours—or at even longer intervals—should allow enough drug inside the body to control CAR-T in patients.

This level of control has so far been outside the grasp of oncologists, despite numerous previous ideas. One such suggested method is to build an off switch directly into the cells. Although effective, once activated it also destroys any tumor-killing ability. To continue the treatment, the patient would have to start from scratch.

“As a consequence, physicians and patients have been reluctant to use these safety switches, even when side effects…were severe,” the authors explained.

Another common treatment is steroids. When directly pitted against dasatinib, however, the team found that steroids act more slowly and are less effective at controlling CAR-T activity. Steroids also increase the risk of infections, whereas dasatinib may actually work together with CAR-T to further enhance cancer-treatment efficacy.

To Michael Gilman, CEO of Obsidian Therapeutics based in Massachusetts, the future of CAR-T is bright, not only for blood cancers but also for solid tumors, so long as it’s under control.

“In order for that to happen, these therapies have to be tamed. They have to behave like pharmaceuticals where doses can be controlled and sensitively managed by everyday physicians,” Gilman said.

Image Credit: Meletios Verras / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

July 10, 2019 at 11:00PM

Can AI Save the Internet from Fake News?

https://meson.in/2XgKUaW

There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

Scientists are scrambling for solutions on how to combat deepfakes, while at the same time others are continuing to refine the techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed implanting a type of digital watermark using a neural network that can spot manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black-and-white films to color, though it concedes that the technology could also be used to develop deepfakes.

Words Matter with Fake News

While the current spotlight is on how to combat video and image manipulation, a prolonged trench warfare on fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t the “fake news” label that some reflexively apply to fact-based reporting that is less than flattering to its subject. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
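Cues like these translate directly into numeric features. The cue list and example below are invented for illustration; the U-M classifier learns which signals matter from labeled data:

```python
import re

# A toy cue list; a real system would derive its vocabulary from a corpus.
HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def linguistic_features(text):
    """Extract a few crude stylistic signals from a passage of text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # How often exaggerated vocabulary appears.
        "hyperbole_rate": sum(w in HYPERBOLE for w in words) / max(len(words), 1),
        # Punctuation as a proxy for breathless tone.
        "exclamations": text.count("!"),
        # Deceptive text tends toward shorter sentences.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

feats = linguistic_features("Shocking! The results were overwhelming. Unbelievable!")
```

Features like these would then feed a conventional classifier; no single cue is decisive on its own.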

AI Versus AI

While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.

It performed almost as well against fake news created by a powerful new text-generation system called GPT-2 built by OpenAI, a nonprofit research lab founded by Elon Musk, classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted by a machine learning engineer named Adam King, in which the user types a short prompt and GPT-2 bangs out a short story or poem based on the snippet of text.

No Silver AI Bullet

While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation are abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is developing different detectors using elements of deep learning and natural language processing, among others. He explained that the Logically models analyze information based on a three-pronged approach.

  • Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
  • Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
  • Content: The AI scans articles for hundreds of known indicators typically found in misinformation.
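One hypothetical way to combine three such signals is a weighted score with a band reserved for human review. The weights, thresholds, and function names below are invented; Logically’s production models are certainly more involved:

```python
def credibility(publisher_score, network_score, content_score,
                weights=(0.3, 0.3, 0.4)):
    """Blend the three signals into one score.
    Each input is in [0, 1], where 1 means 'looks reliable'."""
    signals = (publisher_score, network_score, content_score)
    return sum(w * s for w, s in zip(weights, signals))

def needs_human_review(score, low=0.35, high=0.75):
    # Only clear-cut cases are decided automatically; everything in
    # between is routed to a person, as the article describes.
    return low <= score <= high
```

The point of the band is exactly the one Williams makes: the algorithms triage, and the ambiguous middle always goes to the human layer.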

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”

The company released a consumer app in India back in February, just before that country’s election cycle, which proved a “great testing ground” to refine its technology for the next app release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.

Image Credit: Dennis Lytyagin / Shutterstock.com

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

June 30, 2019 at 11:01PM

Facebook Quietly Admitted Millions More Instagram Users’ Passwords Were at Risk

https://meson.in/2PklN4P

(SAN FRANCISCO) — Millions more Instagram users were affected by a password security lapse than parent company Facebook acknowledged nearly four weeks ago.

The social media giant said in late March that it had inadvertently stored passwords in plain text, making it possible for its thousands of employees to search them. It said the passwords were stored on internal company servers, where no outsiders could access them.
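The lapse described here is storing passwords as searchable plain text. Standard practice is to store only a salted, deliberately slow hash, so that even employees with database access cannot recover the original password. A minimal sketch using Python’s standard library:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest) for storage; the password itself is never kept."""
    salt = os.urandom(16)  # unique per user, so identical passwords differ
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash from a login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

The high iteration count makes each guess expensive for an attacker, and the constant-time comparison avoids leaking information through timing.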

Facebook said in a blog post Thursday that it now estimates that “millions” of Instagram users were affected by the lapse, instead of the “tens of thousands” it had originally reported. It had also said in March that the issue affected “hundreds of millions” of Facebook Lite users and millions of Facebook users. Facebook Lite is designed for people with older phones or slow internet connections.

Science.general

via Techland https://meson.in/2DLLW54

April 19, 2019 at 06:00AM