Tag Archives: Feedly

Announcing wrangler dev — the Edge on localhost

https://meson.in/32ExgDp

Cloudflare Workers — our serverless platform — allows developers around the world to run their applications from our network of 200 datacenters, as close as possible to their users.

A few weeks ago we announced a release candidate for wrangler dev — today, we’re excited to take wrangler dev, the world’s first edge-based development environment, to GA with the release of wrangler 1.11.

Think locally, develop globally

It was once assumed that to successfully run an application on the web, one had to go and acquire a server, set it up (in a data center that hopefully you had access to), and then maintain it on an ongoing basis. Luckily for most of us, that assumption was challenged with the emergence of the cloud. The cloud was always assumed to be centralized — large data centers in a single region (“us-east-1”), reserved for compute. The edge? That was for caching static content.

Again, assumptions are being challenged.

Cloudflare Workers is about moving compute from a centralized location to the edge. And it makes sense: if users are distributed all over the globe, why should all of them be routed to us-east-1, on the opposite side of the world, causing latency and degrading user experience?

But challenging one assumption brought others into view. One of the most obvious: would a local development environment actually provide the best experience for someone looking to test their Worker code? Trying to fit the entire Cloudflare edge, with all its dependencies, onto a developer's machine didn't seem like the best approach, especially given that the place the code would run in production was mere milliseconds away from the machine the developer was working on.

When I was in college, getting started with programming, one of the biggest barriers to entry was installing all the dependencies required to run a single library. I would go as far as to say that the third, and often forgotten, hardest problem in computer science is dependency management.

We’re not the first to try and unify development environments across machines — tools such as Docker aim to solve this exact problem by providing a prepackaged development environment.

Yet, packaging up the Workers runtime is not quite so simple.

Beyond the Workers runtime, there are many components that make up Cloudflare's edge, including DNS resolution and the Cloudflare cache, and those components are part of what makes Cloudflare Workers so powerful. Without them, a standalone runtime is insufficient to represent how a Worker's requests are handled in production. The reason to develop locally first is to have the opportunity to experiment without affecting production. Thus, having a local development environment that truly reflects production is a requirement.

wrangler dev

wrangler dev provides all the convenience of a local development environment, without the headache of trying to reproduce the reality of production locally — and then having to keep the two environments in sync.

By running at the edge, it provides a high-fidelity, consistent experience for all developers, without sacrificing the speedy feedback loop of a local development environment.

Live reloading


As you update your code, wrangler dev will detect changes and push the new version of your code to the edge.

console.log() at your fingertips


Previously, to extract your console logs from the Workers runtime, you had to keep the Workers Preview open in a browser window at all times. With wrangler dev, your logs are streamed directly to the terminal of your choice.
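
For example, a minimal Worker like the sketch below (the message and log lines are purely illustrative) will have its console.log output appear in the terminal session running wrangler dev:

// A minimal Worker to illustrate log streaming with wrangler dev.
// Both console.log calls below show up in the terminal running `wrangler dev`.
addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  console.log("incoming request:", request.method, request.url)
  const response = new Response("Hello from the edge!", { status: 200 })
  console.log("responding with status:", response.status)
  return response
}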

Cache API, KV, and more!

Since wrangler dev runs on the edge, you can now easily test the state of a cache.put(), without having to deploy your Worker to production.

wrangler dev will spin up a new KV namespace for development, so you don’t have to worry about affecting your production data.

And if you're looking to test some of the features provided on request.cf, which exposes rich information about the request such as geolocation, those fields will all be populated by the Cloudflare data center.
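
Putting those pieces together, here is a rough sketch of a Worker that exercises the Cache API, a KV binding, and request.cf while running under wrangler dev. The KV_DEMO binding is an assumption for illustration and would need a KV namespace configured in wrangler.toml:

// Illustrative Worker exercising the Cache API, a KV binding, and request.cf.
// KV_DEMO is a hypothetical KV namespace binding configured in wrangler.toml.
addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const cache = caches.default

  // Check the edge cache first; on a miss, build a response and store it.
  let response = await cache.match(request)
  if (!response) {
    // request.cf is populated by the Cloudflare data center handling the request.
    const country = request.cf ? request.cf.country : "unknown"

    // Read and update a per-country counter in KV.
    const seen = parseInt((await KV_DEMO.get(country)) || "0", 10)
    await KV_DEMO.put(country, String(seen + 1))

    response = new Response(`Hello from ${country}, visit #${seen + 1}`, {
      headers: { "Cache-Control": "max-age=60" },
    })
    await cache.put(request, response.clone())
  }
  return response
}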

Get started

wrangler dev is now available in the latest version of Wrangler, the official Cloudflare Workers CLI.

To get started, follow our installation instructions here.

What’s next?

wrangler dev is just our first foray into giving our developers more visibility and agility with their development process.

We recognize that we have a lot more work to do to meet our developers' needs, including providing an easy testing framework for Workers and allowing our customers to observe their Workers' behavior in production.

Just as wrangler dev provides a quick feedback loop between our developers and their code, we'd love to have a tight feedback loop between our developers and our product. We love to hear what you're building, how you're building it, and how we can help you build it better.

Product.platform

via The Cloudflare Blog https://meson.in/2DaAAwa

August 26, 2020 at 08:04PM

How Argo Tunnel engineering uses Argo Tunnel

https://meson.in/2EBuwP7

Whether you are managing a fleet of machines or sharing a private site from your localhost, Argo Tunnel is here to help. On the Argo Tunnel team we help make origins accessible from the Internet in a secure and seamless manner. We also care deeply about productivity and developer experience for the team, so naturally we want to make sure we have a development environment that is reliable, easy to set up and fast to iterate on.

A brief history of our development environment (dev-stack)

Docker compose

When our development team was still small, we used a docker-compose file to orchestrate the services needed to develop Argo Tunnel. There was no native support for hot reload, so every time an engineer made a change, they had to restart their dev-stack.

We could hack around it to hot reload with docker-compose, but when that failed, we had to waste time debugging the internals of Docker. As the team grew, we realized we needed to invest in improving our dev stack.

At the same time, Cloudflare was in the process of migrating from Marathon to Kubernetes (k8s). We set out to find a tool that could detect changes in source code and automatically upgrade pods with new images.

Skaffold + Minikube

Initially, Skaffold seemed to match the criteria. It watches for changes in source code, builds new images, and deploys applications onto any k8s cluster. Following Skaffold's tutorial, we picked minikube as the local k8s, but together they didn't meet our expectations. Port forwarding wasn't stable; we got frequent connection refusals and timeouts.

In addition, iteration time didn't improve, because spinning up minikube takes a long time and it doesn't use the host's Docker registry, so it can't take advantage of caching. At this point we considered reverting to docker-compose, but the k8s ecosystem is booming, so we did some more research.

Tilt + Docker for Mac k8s

Eventually we found a great blog post from Tilt comparing different options for local k8s, and they seemed to be solving the exact problem we were having. Tilt is a tool that makes local development on k8s easier. It detects changes in local sources and updates your deployment accordingly.

In addition, it supports live updates without having to rebuild containers, a process that used to take around 20 minutes. With live updates, we can copy the newest source into the container, run cargo build within the container, and restart the service without building a new image. Following Tilt’s blog post, we switched to Docker for Mac’s built-in k8s. Combining Tilt and Docker for Mac k8s, we finally have a development environment that meets our needs.

Rust services that could take 20 minutes to rebuild now take less than a minute.

Collaborating with a distributed team

We reached a much happier state with our dev-stack, but one problem remained: we needed a way to share it. As our team became distributed, with people in Austin, Lisbon, and Seattle, we needed better ways to help each other.

One day, I was helping our newest member understand an error observed in cloudflared, Argo Tunnel’s command line interface (CLI) client. I knew the error could either originate from the backend service or a mock API gateway service, but I couldn’t tell for sure without looking at logs.

To get them, I had to ask our new teammate to manually send me the logs of the two services. By the time I discovered the source of the error, reviewed the deployment manifest, and determined the error was caused by a secret set as an empty string, two full hours had elapsed!

I could have solved this in minutes if I had remote access to her development environment. That's exactly what Argo Tunnel can do! Argo Tunnel provides remote access to development environments by creating secure, outbound-only connections from the resource to Cloudflare's edge network, exposing it to the Internet without opening inbound ports. That model helps protect servers and resources from attacks that rely on a publicly exposed IP address.

I can use Argo Tunnel to expose a remote dev environment, but the information stored there is sensitive. Once it is exposed, we need a way to prevent anyone from reaching it unless they are an authenticated member of my team. Cloudflare Access solves that challenge. Access sits in front of the hostname powered by Argo Tunnel and checks for identity on every request. I can combine both services to share the dev-stack with the rest of the team in a secure deployment.

The built-in k8s dashboard gives a great overview of the dev-stack, with the list of pods, deployments, services, config maps, secrets, etc. It also allows us to inspect pod logs and exec into a container. By default, it is secured by a token that changes every time the service restarts. To avoid the hassle of distributing the service token to everyone on the team, we wrote a simple reverse proxy that injects the service token in the authorization header before forwarding requests to the dashboard service.

Then we run Argo Tunnel as a sidecar to this reverse proxy, so it is accessible from the Internet. Finally, to make sure no random person can see our dashboard, we put an Access policy that only allows team members to access the hostname.

The request flow is: eyeball -> Access -> Argo Tunnel -> reverse proxy -> dashboard service.


Working example

Your team can use the same model to develop remotely. Here’s how to get started.

1. Start a local k8s cluster. https://docs.tilt.dev/choosing_clusters.html offers great advice on choosing a local cluster based on your OS and experience with k8s.

2. Enable the dashboard service.


3. Create a reverse proxy that will inject the service token of the kubernetes-dashboard service account into the Authorization header before forwarding requests to the Kubernetes dashboard service:

package main
 
import (
   "crypto/tls"
   "fmt"
   "net/http"
   "net/http/httputil"
   "net/url"
   "os"
)
 
func main() {
   config, err := loadConfigFromEnv()
   if err != nil {
       panic(err)
   }
   reverseProxy := httputil.NewSingleHostReverseProxy(config.proxyURL)
   // The default Director builds the request URL. We want our custom Director to add Authorization, in
   // addition to building the URL
   singleHostDirector := reverseProxy.Director
   reverseProxy.Director = func(r *http.Request) {
       singleHostDirector(r)
       r.Header.Add("Authorization", fmt.Sprintf("Bearer %s", config.token))
       fmt.Println("request header", r.Header)
       fmt.Println("request host", r.Host)
       fmt.Println("request ULR", r.URL)
   }
   reverseProxy.Transport = &http.Transport{
       TLSClientConfig: &tls.Config{
           InsecureSkipVerify: true,
       },
   }
   server := http.Server{
       Addr:    config.listenAddr,
       Handler: reverseProxy,
   }
   if err := server.ListenAndServe(); err != nil {
       panic(err)
   }
}
 
type config struct {
   listenAddr string
   proxyURL   *url.URL
   token      string
}
 
func loadConfigFromEnv() (*config, error) {
   listenAddr, err := requireEnv("LISTEN_ADDRESS")
   if err != nil {
       return nil, err
   }
   proxyURLStr, err := requireEnv("DASHBOARD_PROXY_URL")
   if err != nil {
       return nil, err
   }
   proxyURL, err := url.Parse(proxyURLStr)
   if err != nil {
       return nil, err
   }
   token, err := requireEnv("DASHBOARD_TOKEN")
   if err != nil {
       return nil, err
   }
   return &config{
       listenAddr: listenAddr,
       proxyURL:   proxyURL,
       token:      token,
   }, nil
}
 
func requireEnv(key string) (string, error) {
   result := os.Getenv(key)
   if result == "" {
       return "", fmt.Errorf("%v not provided", key)
   }
   return result, nil
}

4. Create an Argo Tunnel sidecar to expose this reverse proxy:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: dashboard-auth-proxy
 namespace: kubernetes-dashboard
 labels:
   app: dashboard-auth-proxy
spec:
 replicas: 1
 selector:
   matchLabels:
     app: dashboard-auth-proxy
 template:
   metadata:
     labels:
       app: dashboard-auth-proxy
   spec:
     containers:
       - name: dashboard-tunnel
         # Image from https://hub.docker.com/r/cloudflare/cloudflared
         image: cloudflare/cloudflared:2020.8.0
         command: ["cloudflared", "tunnel"]
         ports:
           - containerPort: 5000
         env:
           - name: TUNNEL_URL
             value: "http://localhost:8000"
           - name: NO_AUTOUPDATE
             value: "true"
           - name: TUNNEL_METRICS
             value: "localhost:5000"
        # dashboard-auth-proxy injects the dashboard token into the Authorization header before
        # forwarding the request to the kubernetes-dashboard service
       - name: dashboard-auth-proxy
         image: dashboard-auth-proxy
         ports:
           - containerPort: 8000
         env:
           - name: LISTEN_ADDRESS
             value: localhost:8000
           - name: DASHBOARD_PROXY_URL
             value: https://kubernetes-dashboard
           - name: DASHBOARD_TOKEN
             valueFrom:
               secretKeyRef:
                 name: ${TOKEN_NAME}
                 key: token

5. Find out the URL to access your dashboard from Tilt’s UI


6. Share the URL with your collaborators so they can access your dashboard from anywhere through the tunnel!


You can find the source code for the example at https://github.com/cloudflare/argo-tunnel-examples/tree/master/sharing-k8s-dashboard.

If this sounds like a team you want to be on, we are hiring!

Product.platform

via The Cloudflare Blog https://meson.in/2DaAAwa

August 27, 2020 at 08:03PM

Asynchronous HTMLRewriter for Cloudflare Workers

https://meson.in/3jpRk3c

Last year, we launched HTMLRewriter for Cloudflare Workers, which enables developers to make streaming changes to HTML on the edge. Unlike a traditional DOM parser that loads the entire HTML document into memory, we developed a streaming parser written in Rust. Today, we’re announcing support for asynchronous handlers in HTMLRewriter. Now you can perform asynchronous tasks based on the content of the HTML document: from prefetching fonts and image assets to fetching user-specific content from a CMS.

How can I use HTMLRewriter?

We designed HTMLRewriter to have a jQuery-like experience. First, you define a handler, then you assign it to a CSS selector; Workers does the rest for you. You can look at our new and improved documentation to see our list of supported selectors, which now includes nth-child selectors. The example below changes the alternative text for every second image in a document.

async function editHtml(request) {
  return new HTMLRewriter()
     .on("img:nth-child(2)", new ElementHandler())
     .transform(await fetch(request))
}

class ElementHandler {
   element(e) {
      e.setAttribute("alt", "A very interesting image")
   }
}

Since these changes are applied using streams, we maintain a low TTFB (time to first byte) and users never know the HTML was transformed. If you’re interested in how we’re able to accomplish this technically, you can read our blog post about HTML parsing.

What’s new with HTMLRewriter?

Now you can define an async handler, which allows any code that uses await. This means you can perform dynamic HTML injection based on the contents of the document, without prior knowledge of what it contains. This allows you to customize HTML based on a particular user, feature flag, or even an integration with a CMS.

class UserCustomizer {
   // Remember to add the `async` keyword to the handler method
   async element(e) {
      const user = await fetch(`https://my.api.com/user/${e.getAttribute("user-id")}/online`)
      if (user.ok) {
         // Add the user’s name to the element
         e.setAttribute("user-name", await user.text())
      } else {
         // Remove the element, since this user is not online
         e.remove()
      }
   }
}

What can I build with HTMLRewriter?

To illustrate the flexibility of HTMLRewriter, I wrote an example that you can deploy on your own website. If you manage a website, you know that old links and images can expire with time. Here's an excerpt from a years-old post I wrote on the Cloudflare Blog:

[Screenshot: an excerpt from the old post with a broken image placeholder]

As you might see, that missing image is not the prettiest sight. However, we can easily fix this using async handlers in HTMLRewriter. Using a service like the Internet Archive API, we can check if an image no longer exists and rewrite the URL to use the latest archive. That means users don’t see an ugly placeholder and won’t even know the image was replaced.

async function fetchAndFixImages(request) {
   return new HTMLRewriter()
      .on("img", new ImageFixer())
      .transform(await fetch(request))
}

class ImageFixer {
   async element(e) {
      const url = e.getAttribute("src")
      const response = await fetch(url)
      if (!response.ok) {
         const archive = await fetch(`https://archive.org/wayback/available?url=${url}`)
         if (archive.ok) {
            const snapshot = await archive.json()
            // The Wayback API returns 200 even when no snapshot is available, so check before using it
            const closest = snapshot.archived_snapshots && snapshot.archived_snapshots.closest
            if (closest) {
               e.setAttribute("src", closest.url)
            } else {
               e.remove()
            }
         } else {
            e.remove()
         }
      }
   }
}

Using the Workers Playground, you can view a working sample of the above code. A more complex example could even alert a service like Sentry when a missing image is detected. Using the previous missing image as a test, the image is now restored and users are none the wiser.
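
As a sketch of the alerting idea mentioned above, the handler below reports missing images to a hypothetical error-tracking webhook before applying a fix. The ERROR_WEBHOOK_URL and the payload shape are assumptions, not a specific Sentry integration:

// Sketch: report missing images to an error-tracking endpoint while fixing them.
// ERROR_WEBHOOK_URL is a hypothetical endpoint; swap in your error tracker of choice.
const ERROR_WEBHOOK_URL = "https://example.com/report-missing-image"

class ReportingImageFixer {
  async element(e) {
    const url = e.getAttribute("src")
    const response = await fetch(url)
    if (!response.ok) {
      // Report the missing image, then fall back to the Internet Archive as shown above.
      await fetch(ERROR_WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ missingImage: url, status: response.status }),
      })
    }
  }
}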


If you’re interested in deploying this to your own website, click on the button below:

[Deploy to Workers button]

What else can I build with HTMLRewriter?

We’ve been blown away by developer projects using HTMLRewriter. Here are a few projects that caught our eye and are great examples of the power of Cloudflare Workers and HTMLRewriter:

If you’re interested in using HTMLRewriter, check out our documentation. Also be sure to share any creations you’ve made with @CloudflareDev, we love looking at the awesome projects you build.

Product.platform

via The Cloudflare Blog https://meson.in/2DaAAwa

August 28, 2020 at 08:01PM

Scientists Used Protein Switches to Turn T Cells Into Cancer-Fighting Guided Missiles

https://meson.in/2EIfmY0

[Image: CAR T cells for cancer therapy]

One of the main challenges in curing cancer is that, unlike foreign invaders, tumor cells are part of the body and so are able to hide in plain sight. Now researchers have found a way to turn white blood cells into precision-guided missiles that can sniff out these wolves in sheep's clothing.

One of the biggest breakthroughs in treating cancer in recent years has been the emergence of CAR-T cell therapies, which recruit the body’s immune system to fight tumors rather than relying on radiotherapy or powerful chemotherapy drugs that can have severe side effects.

The approach relies on T cells, the hunter-killer white blood cells that seek out and destroy pathogens. Therapies involve drawing blood from the patient, separating their T cells, and then genetically engineering them to produce “chimeric antigen receptors” (CARs) that target specific proteins called antigens on the surface of cancer cells. They are then re-administered to the patient to track down and destroy cancer cells.

The only problem is that very few cancers have unique antigens. Unlike the pathogens the T cells are used to hunting, tumor cells are not that dissimilar to the body's other cells and often share many antigens. That means there's a risk of T cells targeting the wrong cells and causing serious damage to healthy tissue. As a result, the only therapies approved by the FDA so far are focused on blood cancers that affect cells with idiosyncratic antigens.

Now though, researchers at the University of Washington have found a way to help T cells target a far broader range of cancers. They’ve developed a system of proteins that can carry out logic operations just like a computer, which helps them target specific combinations of antigens that are unique to certain cancers.

“T cells are extremely efficient killers, so the fact that we can limit their activity on cells with the wrong combination of antigens yet still rapidly eliminate cells with the correct combination is game-changing,” said Alexander Salter, one of the lead authors of the study published in Science.

Their technique relies on a series of synthetic proteins that can be customized to create a variety of switches. These can be combined to carry out the AND, OR, and NOT operations at the heart of digital computing, which makes it possible to create instructions that focus on unique combinations of antigens such as “target antigen 1 AND antigen 2 but NOT antigen 3.”

When the correct collection of antigens is present, the proteins combine to create a kind of molecular beacon that can guide CAR-T cells to the tumor cell. To demonstrate the effectiveness of the approach, they showed how it helped CAR T cells pick out and destroy specific tumor cells in a mixture of several different cell types.

Most other approaches for helping target T cells are either only able to do basic AND operations to combine two antigens, or rely on engineering the targeting into the T cells themselves, which is far more complicated. There are still significant barriers to overcome, though.

For a start, the bespoke nature of CAR T-cell therapy means it can be extremely expensive (as high as $1.5 million), so access to this technology, should it make it to the clinic, will be limited. So far the researchers have only studied the proteins' behavior in vitro, so it's unclear how the body's immune system would respond to them if they were injected into a human.

There are also other barriers to treating solid tumors using T cells beyond simply targeting the correct cells. T cells struggle to get inside large masses of cancer cells, and even if they do, these tumors often produce proteins that inhibit the effectiveness of the T cells.

This new protein logic system is still a major breakthrough in the fight against cancer, though. And the researchers point out the technique could be used to target all kinds of different biomedical processes, including gene therapies where you need to deliver DNA to a specific kind of cell. The potential applications of this new missile guidance system for cells are only just starting to be explored.

Image Credit: National Institutes of Health/Alex Ritter, Jennifer Lippincott Schwartz, Gillian Griffiths

Think.intellect

via Singularity Hub https://meson.in/2EASxAx

August 24, 2020 at 11:00PM

Cloud Spanner Emulator Reaches 1.0 Milestone!

https://meson.in/2YdPSsA

The Cloud Spanner emulator provides application developers with the full set of APIs, including the full breadth of SQL and DDL features that can be run locally for prototyping, development and testing. This offline emulator is free and improves developer productivity for customers. Today, we are happy to announce that Cloud Spanner emulator is generally available (GA) with support for Partitioned APIs, Cloud Spanner client libraries, and SQL features.

Since the Cloud Spanner emulator's beta launch in April 2020, we have seen strong adoption of the local emulator from customers of Cloud Spanner. Several new and existing customers adopted the emulator in their development and continuous test pipelines. They noticed significant improvements in developer productivity, speed of test execution, and error-free applications deployed to production. We also added several features in this release based on the valuable feedback we received from beta users. The full list of features is documented in the GitHub readme.

Partition APIs

When reading or querying large amounts of data from Cloud Spanner, it can be useful to divide the query into smaller pieces, or partitions, and use multiple machines to fetch the partitions in parallel. The emulator now supports Partition Read, Partition Query, and Partition DML APIs.
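
As a rough sketch of what that looks like from a client, the snippet below uses the Node.js client's batch transaction support against the emulator (the instance, database, and table names are made up for illustration):

// Sketch: run a partitioned query through the Node.js client against the emulator.
// Assumes SPANNER_EMULATOR_HOST is set and that test-instance/test-db already exist.
const { Spanner } = require("@google-cloud/spanner")

async function partitionedQuery() {
  const spanner = new Spanner({ projectId: "test-project" })
  const database = spanner.instance("test-instance").database("test-db")

  // Split the query into partitions...
  const [transaction] = await database.createBatchTransaction()
  const [partitions] = await transaction.createQueryPartitions(
    "SELECT SingerId, FirstName FROM Singers"
  )

  // ...and execute them; in a real pipeline each partition could run on a separate worker.
  const results = await Promise.all(
    partitions.map((partition) => transaction.execute(partition))
  )
  const rowCount = results.reduce((sum, [rows]) => sum + rows.length, 0)
  console.log(`Fetched ${rowCount} rows across ${partitions.length} partitions`)

  await transaction.close()
}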

Cloud Spanner client libraries

With the GA launch, the latest versions of all the Cloud Spanner client libraries support the emulator. We have added support for C#, Node.js, PHP, Python, Ruby client libraries and the Cloud Spanner JDBC driver. This is in addition to C++, Go and Java client libraries that were already supported with the beta launch. Be sure to check out the minimum version for each of the client libraries that support the emulator.

Use the Getting Started guides to try the emulator with the client library of your choice.
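
With the Node.js client, for example, pointing the library at a locally running emulator typically only requires the SPANNER_EMULATOR_HOST environment variable (9010 is the emulator's default gRPC port; the project, instance, and database names below are placeholders):

// Sketch: point the Node.js client library at a local emulator.
// No credentials are needed; the project ID can be any string when using the emulator.
process.env.SPANNER_EMULATOR_HOST = "localhost:9010"

const { Spanner } = require("@google-cloud/spanner")

const spanner = new Spanner({ projectId: "test-project" })
const database = spanner.instance("test-instance").database("test-db")

async function main() {
  // Assumes the instance and database have already been created on the emulator.
  const [rows] = await database.run("SELECT 1 AS one")
  console.log(rows.map((row) => row.toJSON()))
}

main()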

SQL features

The emulator now supports the full set of SQL features provided by Cloud Spanner. Notable additions include support for the SQL functions JSON_VALUE, JSON_QUERY, CEILING, POWER, CHARACTER_LENGTH, and FORMAT. We now also support untyped parameter bindings in SQL statements, which are used by our client libraries written in dynamically typed languages, e.g., Python, PHP, Node.js, and Ruby.

Using Emulator in CI/CD pipelines

You may now point the majority of your existing CI/CD pipelines at the Cloud Spanner emulator instead of a real Cloud Spanner instance brought up on GCP. This will save you both cost and time, since an emulator instance comes up instantly and is free to use!

What’s even better is that you can bring up multiple instances in a single execution of the emulator, and of course multiple databases. Thus, tests that interact with a Cloud Spanner database can now run in parallel since each of them can have their own database, making tests hermetic. This can reduce flakiness in unit tests and reduce the number of bugs that can make their way to continuous integration tests or to production.

In case your existing CI/CD architecture assumes the existence of a Cloud Spanner test instance and/or test database against which the tests run, you can achieve similar functionality with the emulator as well. Note that the emulator doesn't come up with a default instance or a default database, as we expect users to create instances and databases as required in their tests for hermeticity, as explained above. Below are two examples of how you can bring up an emulator with a default instance or database: 1) by using a Docker image or 2) programmatically.

Starting Emulator from Docker

The emulator can be started using Docker on Linux, macOS, and Windows. As a prerequisite, you will need to install Docker on your system. To bring up an emulator with a default database/instance, you can execute a shell script in your Dockerfile that makes RPC calls to CreateInstance and CreateDatabase after bringing up the emulator server. You can also look at this example of how to put this together when using Docker.

Run Emulator Programmatically

You can bring up the emulator binary in the same process as your test program. Then you can create a default instance/database in your test setup and clean it up when the tests are over. Note that the exact procedure for bringing up an "in-process" service may vary with the client library language and platform of your choice.
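
As a rough sketch with the Node.js client, a test's setup could create its own hermetic instance and database and tear them down afterwards. The names, schema, and the "emulator-config" instance config below are assumptions for illustration:

// Sketch: create a throwaway instance and database on the emulator in test setup.
// Depending on client version, the config may need the full
// "projects/<project>/instanceConfigs/emulator-config" path.
process.env.SPANNER_EMULATOR_HOST = "localhost:9010"
const { Spanner } = require("@google-cloud/spanner")

async function setUp() {
  const spanner = new Spanner({ projectId: "test-project" })

  // Create an instance reserved for this test run.
  const instance = spanner.instance("test-instance")
  const [, instanceOp] = await instance.create({
    config: "emulator-config",
    nodes: 1,
    displayName: "Test instance",
  })
  await instanceOp.promise()

  // Create a database with the schema the tests need.
  const [database, databaseOp] = await instance.createDatabase("test-db", {
    schema: [
      "CREATE TABLE Singers (SingerId INT64 NOT NULL, FirstName STRING(1024)) PRIMARY KEY (SingerId)",
    ],
  })
  await databaseOp.promise()
  return database
}

async function tearDown(database) {
  // Drop the database so every test run starts from a clean slate.
  await database.delete()
}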

Other alternatives for starting the emulator, including pre-built Linux binaries, are listed here.

Try it now

Learn more about Google Cloud Spanner emulator and try it out now.

By Asheesh Agrawal, Google Open Source

Product.google

via Google Open Source Blog https://meson.in/3aGgffE

August 20, 2020 at 01:00AM

New P2P botnet infects SSH servers all over the world

https://meson.in/3l1dtXa

[Image: cartoon of a desktop computer under attack from viruses (credit: Aurich Lawson)]

Researchers have found what they believe is a previously undiscovered botnet that uses unusually advanced measures to covertly target millions of servers around the world.

The botnet uses proprietary software written from scratch to infect servers and corral them into a peer-to-peer network, researchers from security firm Guardicore Labs reported on Wednesday. P2P botnets distribute their administration among many infected nodes rather than relying on a control server to send commands and receive pilfered data. With no centralized server, the botnets are generally harder to spot and more difficult to shut down.

“What was intriguing about this campaign was that, at first sight, there was no apparent command and control (CNC) server being connected to,” Guardicore Labs researcher Ophir Harpaz wrote. “It was shortly after the beginning of the research when we understood no CNC existed in the first place.”


Science.computing

via Biz & IT – Ars Technica https://meson.in/2GxN5ji

August 19, 2020 at 10:22PM

COVID-19 Could Threaten Firefighters As Wildfire Season Ramps Up

https://meson.in/326EWOj

Jon Paul was leery entering his first wildfire camp of the year late last month to fight three lightning-caused fires scorching parts of a Northern California forest that hadn’t burned in 40 years.

The 54-year-old engine captain from southern Oregon knew from experience that these crowded, grimy camps can be breeding grounds for norovirus and a respiratory illness that firefighters call the “camp crud” in a normal year. He wondered what COVID-19 would do in the tent cities where hundreds of men and women eat, sleep, wash and spend their downtime between shifts.

Paul thought about his immunocompromised wife and his 84-year-old mother back home. Then he joined the approximately 1,300 people spread across the Modoc National Forest who would provide a major test for the COVID-prevention measures that had been developed for wildland firefighters.

“We’re still first responders and we have that responsibility to go and deal with these emergencies,” he says. “I don’t scare easy, but I’m very wary and concerned about my surroundings. I’m still going to work and do my job.”

Paul is one of thousands of firefighters from across the U.S. battling dozens of wildfires burning throughout the West. It’s an inherently dangerous job that now carries the additional risk of COVID-19 transmission. Any outbreak that ripples through a camp could easily sideline crews and spread the virus across multiple fires—and back to communities across the country—as personnel transfer in and out of “hot zones” and return home.

Though most firefighters are young and fit, some will inevitably fall ill in these remote makeshift communities of shared showers and portable toilets, where medical care can be limited. The pollutants in the smoke they breathe daily also make them more susceptible to COVID-19 and can worsen the effects of the disease, according to the U.S. Centers for Disease Control and Prevention.

Also, a single suspected or positive case in a camp will mean many other firefighters will need to be quarantined, unable to work. The worst-case scenario is that multiple outbreaks could hamstring the nation's ability to respond as wildfire season peaks in August, the hottest and driest month of the year in the Western U.S.

The number of acres burned so far this year is below the 10-year average, but the fire outlook for August is above average in nine states, according to the National Interagency Fire Center. Twenty-two large fires ignited on Aug. 17 alone after lightning storms passed through the Northwest, and two days later, California declared a state of emergency due to uncontrolled wildfires.

A study published this month by researchers at Colorado State University and the U.S. Forest Service’s Rocky Mountain Research Station concluded that COVID-19 outbreaks “could be a serious threat to the firefighting mission” and urged vigilant social distancing and screening measures in the camps.

“If simultaneous fires incurred outbreaks, the entire wildland response system could be stressed substantially, with a large portion of the workforce quarantined,” the study’s authors wrote.

[Image: Firefighters wear face masks at a morning briefing on the Bighorn Fire, north of Tucson, Ariz., on June 22, 2020. Credit: U.S. Forest Service]

This spring, the National Wildfire Coordinating Group’s Fire Management Board wrote—and has since been updating—protocols to prevent the spread of COVID-19 in fire camps, based on CDC guidelines:

  • Firefighters should be screened for fever and other symptoms when they arrive at camp.
  • Every crew should insulate itself as a “module of one” for the fire season and limit interactions with other crews.
  • Firefighters should maintain social distancing and wear face coverings when social distancing isn’t possible. Smaller satellite camps, known as “spike” camps, can be built to ensure enough space.
  • Shared areas should be regularly cleaned and disinfected, and sharing tools and radios should be minimized.

The guidelines do not include routine testing of newly arrived firefighters—a practice used for athletes at training camps and students returning to college campuses. The Fire Management Board’s Wildland Fire Medical and Public Health Advisory Team wrote in a July 2 memo that it “does not recommend utilizing universal COVID-19 laboratory testing as a standalone risk mitigation or screening measure among wildland firefighters.” Rather, the group recommends testing an individual and directly exposed co-workers, saying that approach is in line with CDC guidance.

The lack of testing capacity and long turnaround times are factors, according to Forest Service spokesperson Dan Hottle. (The exception is Alaska, where firefighters are tested upon arrival at the airport and are quarantined in a hotel while awaiting results, which come in 24 hours, Hottle says.)

Fire crews responding to early season fires in the spring had some problems adjusting to the new protocols, according to assessments written by fire leaders and compiled by the Wildland Fire Lessons Learned Center. Shawn Faiella, superintendent of the interagency "hotshot crew" based at Montana's Lolo National Forest (so named because these crews work the most challenging, or "hottest," parts of wildfires), questioned the need to wear masks inside vehicles and the safety of bringing extra vehicles to space out firefighters traveling to a blaze. Parking extra vehicles at the scene of a fire is difficult on tight forest dirt roads, and would be dangerous if evacuations are necessary, he wrote.

“It’s damn tough to take these practices to the fire line,” Faiella wrote after his team responded to a 40-acre Montana fire in April.

One recommendation that fire managers say has been particularly effective is the “module of one” concept requiring crews to eat and sleep together in isolation for the entire fire season. “Whoever came up with it, it is working,” says Mike Goicoechea, the Montana-based incident commander for the Forest Service’s Northern Region Type 1 team, which manages the nation’s largest and most complex wildfires and natural disasters. “Somebody may test positive, and you end up having to take that module out of service for 14 days. But the nice part is you’re not taking out a whole camp.… It’s just that module.”

There is no single system that is tracking the total number of positive COVID-19 cases among wildland firefighters among the various federal, state, local and tribal agencies. Each fire agency has its own method, says Jessica Gardetto, a spokesperson for the Bureau of Land Management and the National Interagency Fire Center in Idaho.

The largest wildland firefighting agency in the U.S. is the Agriculture Department’s Forest Service, with 10,000 firefighters. Another major agency is the Department of the Interior, which had more than 3,500 full-time fire employees last year. As of the first week of August, 111 Forest Service firefighters and 40 BLM firefighters (who work underneath the broader Interior Department agency) had tested positive for COVID-19, according to officials for the respective agencies. “Considering we’ve now been experiencing fire activity for several months, this number is surprisingly low if you think about the thousands of fire personnel who’ve been suppressing wildfires this summer,” Gardetto says.

Goicoechea and his Montana team traveled north of Tucson, Arizona, on June 22 to manage a rapidly spreading fire in the Santa Catalina Mountains that required 1,200 responders at its peak. Within two days of the team’s arrival, his managers were overwhelmed by calls from firefighters worried or with questions about preventing the spread of COVID-19 or carrying the virus home to their families.

In an unusual move, Goicoechea called upon a Montana physician—and former National Park Service ranger with wildfire experience—Dr. Harry Sibold to join the team. Physicians are rarely, if ever, part of a wildfire camp’s medical team, Goicoechea says. Sibold gave regular coronavirus updates during morning briefings, consulted with local health officials, soothed firefighters worried about bringing the virus home to their families and advised fire managers on how to handle scenarios that might come up.

But Sibold says he wasn’t optimistic at the beginning about keeping the coronavirus in check in a large camp in Pima County, which has the second-highest number of confirmed cases in Arizona, at the time a national COVID-19 hot spot. “I quite firmly expected that we might have two or three outbreaks,” he says.

There were no positive cases during the team’s two-week deployment, just three or four cases where a firefighter showed symptoms but tested negative for the virus. After the Montana team returned home, nine firefighters at the Arizona fire from other units tested positive, Goicoechea says. Contact tracers notified the Montana team, some of whom were tested. All tests returned negative.

“I can’t say enough about having that doctor to help,” Goicoechea says, suggesting other teams might consider doing the same. “We’re not the experts in a pandemic. We’re the experts with fire.”

That early success will be tested as the number of fires increases across the West, along with the number of firefighters responding to them. There were more than 15,000 firefighters and support personnel assigned to fires across the nation as of mid-August, and the success of those COVID-19 prevention protocols depends largely upon them.

Paul, the Oregon firefighter, says that the guidelines were followed closely in camp, but less so out on the fire line. It also appeared to him that younger firefighters were less likely to follow the masking and social-distancing rules than veterans like him. That worries him: it wouldn't take much to spark an outbreak that could sideline crews and cripple the ability to respond to a fire. "We're outside, so it definitely helps with mitigation and makes it simpler to social distance," Paul says. "But I think if there's a mistake made, it could happen."


KHN (Kaiser Health News) is a nonprofit news service covering health issues. It is an editorially independent program of KFF (Kaiser Family Foundation) that is not affiliated with Kaiser Permanente.

Science.general

via TIME.com: Top Science and Health Stories https://meson.in/2U5ujaJ

August 20, 2020 at 12:57AM

Researchers discover novel molecular mechanism that enables conifers to adapt to winter

https://meson.in/3kZzqWx

Unlike broadleaf trees, conifers are evergreen and retain their photosynthesis structure throughout the year. Especially in late winter, the combination of freezing temperatures and high light intensity exposes the needles to oxidative damage that could lead to the destruction of molecules and cell structures that contribute to photosynthesis. Researchers have discovered a previously unknown mechanism that enables spruce trees to adapt to winter.

Bio.technology

via ScienceDaily: Biotechnology News https://meson.in/2CjfWYX

August 20, 2020 at 01:39AM

Mounting poisonings, blindness, deaths as toxic hand sanitizers flood market

https://meson.in/2CYbpOn

[Image: hand sanitizer being dispensed into a person's hand (credit: Getty | Leopoldo Smith)]

The Food and Drug Administration is renewing warnings this week of dangerous hand sanitizers as it continues to find products that contain toxic methanol—a poisonous alcohol that can cause systemic effects, blindness, and death.

The agency’s growing “do-not-use list” of dangerous sanitizers now includes 87 products. And with the mounting tally, the FDA also says there are rising reports from state health departments and poison control centers of injuries and deaths.

“We remain extremely concerned about the potential serious risks of alcohol-based hand sanitizers containing methanol,” said FDA Commissioner Stephen M. Hahn in a statement.


Science.general

via Science – Ars Technica https://meson.in/2GxN5ji

July 29, 2020 at 07:45AM