How Argo Tunnel engineering uses Argo Tunnel


Whether you are managing a fleet of machines or sharing a private site from your localhost, Argo Tunnel is here to help. On the Argo Tunnel team we help make origins accessible from the Internet in a secure and seamless manner. We also care deeply about productivity and developer experience for the team, so naturally we want to make sure we have a development environment that is reliable, easy to set up and fast to iterate on.

A brief history of our development environment (dev-stack)

Docker compose

When our development team was still small, we used a docker-compose file to orchestrate the services needed to develop Argo Tunnel. There was no native support for hot reload, so every time an engineer made a change, they had to restart their dev-stack.

We could hack around it to hot reload with docker-compose, but when that failed, we had to waste time debugging the internals of Docker. As the team grew, we realized we needed to invest in improving our dev stack.

At the same time, Cloudflare was in the process of migrating from Marathon to Kubernetes (k8s). We set out to find a tool that could detect changes in source code and automatically upgrade pods with new images.

Skaffold + Minikube

Initially, Skaffold seemed to match the criteria. It watches for changes in source code, builds new images, and deploys applications onto any k8s cluster. Following Skaffold’s tutorial, we picked minikube as the local k8s, but together they didn’t meet our expectations: port forwarding wasn’t stable, and we saw frequent connection refusals and timeouts.

In addition, iteration time didn’t improve, because spinning up minikube takes a long time and it doesn’t use the host’s Docker registry, so it can’t take advantage of caching. At this point we considered reverting to docker-compose, but the k8s ecosystem is booming, so we did some more research.

Tilt + Docker for mac k8s

Eventually we found a great blog post from Tilt comparing different options for local k8s, and they seemed to be solving the exact problem we were having. Tilt is a tool that makes local development on k8s easier. It detects changes in local sources and updates your deployment accordingly.

In addition, it supports live updates without having to rebuild containers, a process that used to take around 20 minutes. With live updates, we can copy the newest source into the container, run cargo build within the container, and restart the service without building a new image. Following Tilt’s blog post, we switched to Docker for Mac’s built-in k8s. Combining Tilt and Docker for Mac k8s, we finally have a development environment that meets our needs.
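The live update flow described above can be sketched in a Tiltfile. This is a minimal illustration, not our actual configuration; the image name, paths, and resource names are hypothetical:

```python
# Tiltfile (Starlark). All names here are hypothetical examples.
docker_build(
    'registry.local/tunnel-service',  # image referenced by the k8s Deployment
    '.',
    live_update=[
        # Copy changed sources into the running container instead of rebuilding the image
        sync('./src', '/app/src'),
        # Re-run the build inside the container whenever sources change
        run('cargo build', trigger=['./src']),
    ],
)
k8s_yaml('deploy/dev-stack.yaml')
k8s_resource('tunnel-service', port_forwards=8080)
```

With a setup along these lines, Tilt copies sources and reruns the build inside the already-running container on each change, which is what avoids the full image rebuild.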

Rust services that could take 20 minutes to rebuild now take less than a minute.

Collaborating with a distributed team

We reached a much happier state with our dev-stack, but one problem remained: we needed a way to share it. As our teams became distributed with people in Austin, Lisbon and Seattle, we needed better ways to help each other.

One day, I was helping our newest member understand an error observed in cloudflared, Argo Tunnel’s command line interface (CLI) client. I knew the error could either originate from the backend service or a mock API gateway service, but I couldn’t tell for sure without looking at logs.

To get them, I had to ask our new teammate to manually send me the logs of the two services. By the time I discovered the source of the error, reviewed the deployment manifest, and determined the error was caused by a secret set as an empty string, two full hours had elapsed!

I could have solved this in minutes if I had remote access to her development environment. That’s exactly what Argo Tunnel can do! Argo Tunnel provides remote access to development environments by creating secure, outbound-only connections from a resource to Cloudflare’s edge network, exposing the resource to the Internet without opening any inbound ports. That model helps protect servers and resources from being attacked through an exposed IP address.

I can use Argo Tunnel to expose a remote dev environment, but the information stored there is sensitive. Once it is exposed, we need a way to prevent users from reaching it unless they are authenticated members of my team. Cloudflare Access solves that challenge. Access sits in front of the hostname powered by Argo Tunnel and checks for identity on every request. I can combine both services to share the dev-stack with the rest of the team in a secure deployment.

The built-in k8s dashboard gives a great overview of the dev-stack, with the list of pods, deployments, services, config maps, secrets, etc. It also allows us to inspect pod logs and exec into a container. By default, it is secured by a token that changes every time the service restarts. To avoid the hassle of distributing the service token to everyone on the team, we wrote a simple reverse proxy that injects the service token in the authorization header before forwarding requests to the dashboard service.

Then we run Argo Tunnel as a sidecar to this reverse proxy, so it is accessible from the Internet. Finally, to make sure no random person can see our dashboard, we put an Access policy that only allows team members to access the hostname.

The request flow is eyeball -> Access -> Argo Tunnel -> reverse proxy -> dashboard service

Working example

Your team can use the same model to develop remotely. Here’s how to get started.

1. Start a local k8s cluster. Tilt’s blog post comparing local k8s options, mentioned above, offers great advice on choosing a cluster based on your OS and experience with k8s.

2. Enable the dashboard service:

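Deploying the dashboard service typically comes down to applying the upstream kubernetes-dashboard manifest. A sketch, with the version pinned purely as an example:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
```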
3. Create a reverse proxy that injects the service token of the kubernetes-dashboard service account into the Authorization header before forwarding requests to the kubernetes-dashboard service:

package main

import (
    "crypto/tls"
    "fmt"
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
    "os"
)

func main() {
    config, err := loadConfigFromEnv()
    if err != nil {
        log.Fatal(err)
    }
    reverseProxy := httputil.NewSingleHostReverseProxy(config.proxyURL)
    // The default Director builds the request URL. We want our custom Director to add Authorization, in
    // addition to building the URL
    singleHostDirector := reverseProxy.Director
    reverseProxy.Director = func(r *http.Request) {
        singleHostDirector(r)
        r.Header.Add("Authorization", fmt.Sprintf("Bearer %s", config.token))
        fmt.Println("request header", r.Header)
        fmt.Println("request host", r.Host)
        fmt.Println("request URL", r.URL)
    }
    // The dashboard serves TLS with a self-signed certificate, so skip verification
    reverseProxy.Transport = &http.Transport{
        TLSClientConfig: &tls.Config{
            InsecureSkipVerify: true,
        },
    }
    server := http.Server{
        Addr:    config.listenAddr,
        Handler: reverseProxy,
    }
    log.Fatal(server.ListenAndServe())
}

type config struct {
    listenAddr string
    proxyURL   *url.URL
    token      string
}

func loadConfigFromEnv() (*config, error) {
    listenAddr, err := requireEnv("LISTEN_ADDRESS")
    if err != nil {
        return nil, err
    }
    proxyURLStr, err := requireEnv("DASHBOARD_PROXY_URL")
    if err != nil {
        return nil, err
    }
    proxyURL, err := url.Parse(proxyURLStr)
    if err != nil {
        return nil, err
    }
    token, err := requireEnv("DASHBOARD_TOKEN")
    if err != nil {
        return nil, err
    }
    return &config{
        listenAddr: listenAddr,
        proxyURL:   proxyURL,
        token:      token,
    }, nil
}

func requireEnv(key string) (string, error) {
    result := os.Getenv(key)
    if result == "" {
        return "", fmt.Errorf("%v not provided", key)
    }
    return result, nil
}

4. Create an Argo Tunnel sidecar to expose this reverse proxy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard-auth-proxy
  namespace: kubernetes-dashboard
  labels:
    app: dashboard-auth-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard-auth-proxy
  template:
    metadata:
      labels:
        app: dashboard-auth-proxy
    spec:
      containers:
        - name: dashboard-tunnel
          # Image from
          image: cloudflare/cloudflared:2020.8.0
          command: ["cloudflared", "tunnel"]
          ports:
            - containerPort: 5000
          env:
            - name: TUNNEL_URL
              value: "http://localhost:8000"
            - name: NO_AUTOUPDATE
              value: "true"
            - name: TUNNEL_METRICS
              value: "localhost:5000"
        # dashboard-auth-proxy injects the dashboard token into the Authorization header before
        # forwarding the request to the kubernetes-dashboard service
        - name: dashboard-auth-proxy
          image: dashboard-auth-proxy
          ports:
            - containerPort: 8000
          env:
            - name: LISTEN_ADDRESS
              value: localhost:8000
            - name: DASHBOARD_PROXY_URL
              value: https://kubernetes-dashboard
            - name: DASHBOARD_TOKEN
              valueFrom:
                secretKeyRef:
                  name: ${TOKEN_NAME}
                  key: token
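The ${TOKEN_NAME} placeholder above refers to the secret holding the kubernetes-dashboard service-account token. A sketch of how to look it up (secret names vary by cluster and dashboard version):

```shell
# Find the name of the service-account token secret
TOKEN_NAME=$(kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard \
  -o jsonpath='{.secrets[0].name}')

# Optional: print the decoded token; the Deployment reads it via secretKeyRef instead
kubectl -n kubernetes-dashboard get secret "$TOKEN_NAME" \
  -o jsonpath='{.data.token}' | base64 --decode
```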

5. Find out the URL to access your dashboard from Tilt’s UI

6. Share the URL with your collaborators so they can access your dashboard anywhere they are through the tunnel!

You can find the source code for the example in

If this sounds like a team you want to be on, we are hiring!


via The Cloudflare Blog

August 27, 2020 at 08:03PM

Asynchronous HTMLRewriter for Cloudflare Workers

Last year, we launched HTMLRewriter for Cloudflare Workers, which enables developers to make streaming changes to HTML on the edge. Unlike a traditional DOM parser that loads the entire HTML document into memory, we developed a streaming parser written in Rust. Today, we’re announcing support for asynchronous handlers in HTMLRewriter. Now you can perform asynchronous tasks based on the content of the HTML document: from prefetching fonts and image assets to fetching user-specific content from a CMS.

How can I use HTMLRewriter?

We designed HTMLRewriter to have a jQuery-like experience. First, you define a handler, then you assign it to a CSS selector; Workers does the rest for you. You can look at our new and improved documentation to see our supported list of selectors, which now include nth-child selectors. The example below changes the alternative text for every second image in a document.

async function editHtml(request) {
   return new HTMLRewriter()
      .on("img:nth-child(2)", new ElementHandler())
      .transform(await fetch(request))
}

class ElementHandler {
   element(e) {
      e.setAttribute("alt", "A very interesting image")
   }
}

Since these changes are applied using streams, we maintain a low TTFB (time to first byte) and users never know the HTML was transformed. If you’re interested in how we’re able to accomplish this technically, you can read our blog post about HTML parsing.

What’s new with HTMLRewriter?

Now you can define an async handler which allows any code that uses await. This means you can make dynamic HTML injection, based on the contents of the document, without having prior knowledge of what it contains. This allows you to customize HTML based on a particular user, feature flag, or even an integration with a CMS.

class UserCustomizer {
   // Remember to add the `async` keyword to the handler method
   async element(e) {
      const user = await fetch(`${e.getAttribute("user-id")}/online`)
      if (user.ok) {
         // Add the user’s name to the element
         e.setAttribute("user-name", await user.text())
      } else {
         // Remove the element, since this user is not online
         e.remove()
      }
   }
}

What can I build with HTMLRewriter?

To illustrate the flexibility of HTMLRewriter, I wrote an example that you can deploy on your own website. If you manage a website, you know that old links and images can expire with time. Here’s an excerpt from a years-old post I wrote on the Cloudflare Blog:

As you might see, that missing image is not the prettiest sight. However, we can easily fix this using async handlers in HTMLRewriter. Using a service like the Internet Archive API, we can check if an image no longer exists and rewrite the URL to use the latest archive. That means users don’t see an ugly placeholder and won’t even know the image was replaced.

async function fetchAndFixImages(request) {
   return new HTMLRewriter()
      .on("img", new ImageFixer())
      .transform(await fetch(request))
}

class ImageFixer {
   async element(e) {
      var url = e.getAttribute("src")
      var response = await fetch(url)
      if (!response.ok) {
         var archive = await fetch(`${url}`)
         if (archive.ok) {
            var snapshot = await archive.json()
            e.setAttribute("src", snapshot.archived_snapshots.closest.url)
         } else {
            // No snapshot available; leave the image as-is
         }
      }
   }
}

Using the Workers Playground, you can view a working sample of the above code. A more complex example could even alert a service like Sentry when a missing image is detected. Using the previous missing image, now you can see the image is restored and users are none the wiser.

If you’re interested in deploying this to your own website, click on the button below:

What else can I build with HTMLRewriter?

We’ve been blown away by developer projects using HTMLRewriter. Here are a few projects that caught our eye and are great examples of the power of Cloudflare Workers and HTMLRewriter:

If you’re interested in using HTMLRewriter, check out our documentation. Also be sure to share any creations you’ve made with @CloudflareDev, we love looking at the awesome projects you build.


via The Cloudflare Blog

August 28, 2020 at 08:01PM

Scientists Used Protein Switches to Turn T Cells Into Cancer-Fighting Guided Missiles

Image: CAR T cells for cancer therapy

One of the main challenges in curing cancer is that unlike foreign invaders, tumor cells are part of the body and so able to hide in plain sight. Now researchers have found a way to turn white blood cells into precision guided missiles that can sniff out these wolves in sheep’s clothing.

One of the biggest breakthroughs in treating cancer in recent years has been the emergence of CAR-T cell therapies, which recruit the body’s immune system to fight tumors rather than relying on radiotherapy or powerful chemotherapy drugs that can have severe side effects.

The approach relies on T cells, the hunter-killer white blood cells that seek out and destroy pathogens. Therapies involve drawing blood from the patient, separating their T cells, and then genetically engineering them to produce “chimeric antigen receptors” (CARs) that target specific proteins called antigens on the surface of cancer cells. They are then re-administered to the patient to track down and destroy cancer cells.

The only problem is that very few cancers have unique antigens. Unlike the pathogens the T cells are used to hunting, tumor cells are not that dissimilar to the body’s other cells and often share many antigens. That means there’s a risk of T cells targeting the wrong cells and causing serious damage to healthy tissue. As a result the only therapies approved by the FDA so far are focused on blood cancers that affect cells with idiosyncratic antigens.

Now though, researchers at the University of Washington have found a way to help T cells target a far broader range of cancers. They’ve developed a system of proteins that can carry out logic operations just like a computer, which helps them target specific combinations of antigens that are unique to certain cancers.

“T cells are extremely efficient killers, so the fact that we can limit their activity on cells with the wrong combination of antigens yet still rapidly eliminate cells with the correct combination is game-changing,” said Alexander Salter, one of the lead authors of the study published in Science.

Their technique relies on a series of synthetic proteins that can be customized to create a variety of switches. These can be combined to carry out the AND, OR, and NOT operations at the heart of digital computing, which makes it possible to create instructions that focus on unique combinations of antigens such as “target antigen 1 AND antigen 2 but NOT antigen 3.”
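As a rough illustration of such a gate, the targeting rule can be modeled as a boolean function over a cell’s surface antigens. This is only a toy sketch of the logic, not the biochemistry, and the antigen names are hypothetical:

```python
def should_engage(antigens):
    """Toy AND/NOT gate: engage cells carrying antigen1 AND antigen2 but NOT antigen3."""
    return ("antigen1" in antigens
            and "antigen2" in antigens
            and "antigen3" not in antigens)

# A tumor cell with the target combination is engaged
print(should_engage({"antigen1", "antigen2"}))              # True
# A healthy cell sharing the first two antigens but also carrying antigen3 is spared
print(should_engage({"antigen1", "antigen2", "antigen3"}))  # False
```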

When the correct collection of antigens is present, the proteins combine to create a kind of molecular beacon that can guide CAR-T cells to the tumor cell. To demonstrate the effectiveness of the approach, they showed how it helped CAR T cells pick out and destroy specific tumor cells in a mixture of several different cell types.

Most other approaches for helping target T cells are either only able to do basic AND operations to combine two antigens, or rely on engineering the targeting into the T cells themselves, which is far more complicated. There are still significant barriers to overcome, though.

For a start, the bespoke nature of CAR T-cell therapy means it can be extremely expensive (as high as $1.5m), so access to this technology, should it make it to the clinic, will be limited. So far the researchers have only studied the proteins’ behavior in vitro, so it’s unclear how the body’s immune system would respond to them if they were injected into a human.

There are also other barriers to treating solid tumors using T cells beyond simply targeting the correct cells. T cells struggle to get inside large masses of cancer cells, and even if they do, these tumors often produce proteins that inhibit the effectiveness of the T cells.

This new protein logic system is still a major breakthrough in the fight against cancer, though. And the researchers point out the technique could be used to target all kinds of different biomedical processes, including gene therapies where you need to deliver DNA to a specific kind of cell. The potential applications of this new missile guidance system for cells are only just starting to be explored.

Image Credit: National Institutes of Health/Alex Ritter, Jennifer Lippincott Schwartz, Gillian Griffiths


via Singularity Hub

August 24, 2020 at 11:00PM

Cloud Spanner Emulator Reaches 1.0 Milestone!

The Cloud Spanner emulator provides application developers with the full set of APIs, including the full breadth of SQL and DDL features that can be run locally for prototyping, development and testing. This offline emulator is free and improves developer productivity for customers. Today, we are happy to announce that Cloud Spanner emulator is generally available (GA) with support for Partitioned APIs, Cloud Spanner client libraries, and SQL features.

Since the Cloud Spanner emulator’s beta launch in April 2020, we have seen strong adoption of the local emulator from customers of Cloud Spanner. Several new and existing customers adopted the emulator in their development and continuous test pipelines. They noticed significant improvements in developer productivity and test execution speed, and fewer errors in applications deployed to production. We also added several features in this release based on the valuable feedback we received from beta users. The full list of features is documented in the GitHub readme.

Partition APIs

When reading or querying large amounts of data from Cloud Spanner, it can be useful to divide the query into smaller pieces, or partitions, and use multiple machines to fetch the partitions in parallel. The emulator now supports Partition Read, Partition Query, and Partition DML APIs.
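The pattern is easy to picture outside of Spanner itself. Below is a self-contained sketch (not the Cloud Spanner client API): a key range is split into partitions and each partition is fetched on its own worker thread, with results combined at the end. `fetch_partition` is a stand-in for a partitioned read RPC:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_partition(key_range):
    # Stand-in for a partitioned read RPC; here we just materialize the keys.
    lo, hi = key_range
    return list(range(lo, hi))

def partitioned_read(lo, hi, num_partitions):
    # Split [lo, hi) into roughly equal partitions.
    step = (hi - lo + num_partitions - 1) // num_partitions
    ranges = [(i, min(i + step, hi)) for i in range(lo, hi, step)]
    # Fetch partitions in parallel; map() preserves partition order.
    with ThreadPoolExecutor(max_workers=num_partitions) as pool:
        results = pool.map(fetch_partition, ranges)
    return [row for part in results for row in part]

print(partitioned_read(0, 10, 3))  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```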

Cloud Spanner client libraries

With the GA launch, the latest versions of all the Cloud Spanner client libraries support the emulator. We have added support for C#, Node.js, PHP, Python, Ruby client libraries and the Cloud Spanner JDBC driver. This is in addition to C++, Go and Java client libraries that were already supported with the beta launch. Be sure to check out the minimum version for each of the client libraries that support the emulator.

Use the Getting Started guides to try the emulator with the client library of your choice.

SQL features

The emulator now supports the full set of SQL features provided by Cloud Spanner. Notable additions include support for the SQL functions JSON_VALUE, JSON_QUERY, CEILING, POWER, CHARACTER_LENGTH, and FORMAT. We now also support untyped parameter bindings in SQL statements, which are used by our client libraries written in dynamically typed languages, e.g., Python, PHP, Node.js, and Ruby.

Using Emulator in CI/CD pipelines

You may now point the majority of your existing CI/CD to the Cloud Spanner emulator instead of a real Cloud Spanner instance brought up on GCP. This will save you both cost and time, since an emulator instance comes up instantly and is free to use!

What’s even better is that you can bring up multiple instances in a single execution of the emulator, and of course multiple databases. Thus, tests that interact with a Cloud Spanner database can now run in parallel since each of them can have their own database, making tests hermetic. This can reduce flakiness in unit tests and reduce the number of bugs that can make their way to continuous integration tests or to production.

In case your existing CI/CD architecture assumes the existence of a Cloud Spanner test instance and/or test database against which the tests run, you can achieve similar functionality with the emulator as well. Note that the emulator doesn’t come up with a default instance or a default database, since we expect users to create instances and databases as required in their tests for hermeticity, as explained above. Below are two examples of how you can bring up an emulator with a default instance or database: 1) by using a Docker image, or 2) programmatically.

Starting Emulator from Docker

The emulator can be started using Docker on Linux, macOS, and Windows. As a prerequisite, you need to install Docker on your system. To bring up an emulator with a default database/instance, you can execute a shell script in your Dockerfile that makes RPC calls to CreateInstance and CreateDatabase after bringing up the emulator server. You can also look at this example of how to put this together when using Docker.
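As a sketch of the Docker route, the commands below follow the emulator’s documented defaults; the project, instance, and database names are examples:

```shell
# Start the emulator (gRPC on 9010, REST on 9020)
docker run -d -p 9010:9010 -p 9020:9020 gcr.io/cloud-spanner-emulator/emulator

# Point gcloud at the emulator; no real credentials are needed
gcloud config configurations create emulator
gcloud config set auth/disable_credentials true
gcloud config set project test-project
gcloud config set api_endpoint_overrides/spanner http://localhost:9020/

# Create a default instance and database for tests to use
gcloud spanner instances create test-instance \
  --config=emulator-config --description="Test instance" --nodes=1
gcloud spanner databases create test-db --instance=test-instance
```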

Run Emulator Programmatically

You can bring up the emulator binary in the same process as your test program. Then you can create a default instance/database in your ‘Setup’ step and clean it up when the tests are over. Note that the exact procedure for bringing up an ‘in-process’ service may vary with the client library language and platform of your choice.

Other alternatives to start the emulator, including pre-built Linux binaries, are listed here.

Try it now

Learn more about Google Cloud Spanner emulator and try it out now.

By Asheesh Agrawal, Google Open Source

via Google Open Source Blog

August 20, 2020 at 01:00AM

New P2P botnet infects SSH servers all over the world

Image: a desktop computer under attack from viruses (credit: Aurich Lawson)

Researchers have found what they believe is a previously undiscovered botnet that uses unusually advanced measures to covertly target millions of servers around the world.

The botnet uses proprietary software written from scratch to infect servers and corral them into a peer-to-peer network, researchers from security firm Guardicore Labs reported on Wednesday. P2P botnets distribute their administration among many infected nodes rather than relying on a control server to send commands and receive pilfered data. With no centralized server, the botnets are generally harder to spot and more difficult to shut down.

“What was intriguing about this campaign was that, at first sight, there was no apparent command and control (CNC) server being connected to,” Guardicore Labs researcher Ophir Harpaz wrote. “It was shortly after the beginning of the research when we understood no CNC existed in the first place.”


via Biz & IT – Ars Technica

August 19, 2020 at 10:22PM

COVID-19 Could Threaten Firefighters As Wildfire Season Ramps Up

Jon Paul was leery entering his first wildfire camp of the year late last month to fight three lightning-caused fires scorching parts of a Northern California forest that hadn’t burned in 40 years.

The 54-year-old engine captain from southern Oregon knew from experience that these crowded, grimy camps can be breeding grounds for norovirus and a respiratory illness that firefighters call the “camp crud” in a normal year. He wondered what COVID-19 would do in the tent cities where hundreds of men and women eat, sleep, wash and spend their downtime between shifts.

Paul thought about his immunocompromised wife and his 84-year-old mother back home. Then he joined the approximately 1,300 people spread across the Modoc National Forest who would provide a major test for the COVID-prevention measures that had been developed for wildland firefighters.

“We’re still first responders and we have that responsibility to go and deal with these emergencies,” he says. “I don’t scare easy, but I’m very wary and concerned about my surroundings. I’m still going to work and do my job.”

Paul is one of thousands of firefighters from across the U.S. battling dozens of wildfires burning throughout the West. It’s an inherently dangerous job that now carries the additional risk of COVID-19 transmission. Any outbreak that ripples through a camp could easily sideline crews and spread the virus across multiple fires—and back to communities across the country—as personnel transfer in and out of “hot zones” and return home.

Though most firefighters are young and fit, some will inevitably fall ill in these remote makeshift communities of shared showers and portable toilets, where medical care can be limited. The pollutants in the smoke they breathe daily also make them more susceptible to COVID-19 and can worsen the effects of the disease, according to the U.S. Centers for Disease Control and Prevention.

Also, a single suspected or positive case in a camp will mean many other firefighters will need to be quarantined, unable to work. The worst-case scenario is that multiple outbreaks could hamstring the nation’s ability to respond as wildfire season peaks in August, the hottest and driest month of the year in the Western U.S.

The number of acres burned so far this year is below the 10-year average, but the fire outlook for August is above average in nine states, according to the National Interagency Fire Center. Twenty-two large fires ignited on Aug. 17 alone after lightning storms passed through the Northwest, and two days later, California declared a state of emergency due to uncontrolled wildfires.

A study published this month by researchers at Colorado State University and the U.S. Forest Service’s Rocky Mountain Research Station concluded that COVID-19 outbreaks “could be a serious threat to the firefighting mission” and urged vigilant social distancing and screening measures in the camps.

“If simultaneous fires incurred outbreaks, the entire wildland response system could be stressed substantially, with a large portion of the workforce quarantined,” the study’s authors wrote.

Firefighters wear face masks at a morning briefing on the Bighorn Fire, north of Tucson, Ariz., on June 22, 2020. (U.S. Forest Service)

This spring, the National Wildfire Coordinating Group’s Fire Management Board wrote—and has since been updating—protocols to prevent the spread of COVID-19 in fire camps, based on CDC guidelines:

  • Firefighters should be screened for fever and other symptoms when they arrive at camp.
  • Every crew should insulate itself as a “module of one” for the fire season and limit interactions with other crews.
  • Firefighters should maintain social distancing and wear face coverings when social distancing isn’t possible. Smaller satellite camps, known as “spike” camps, can be built to ensure enough space.
  • Shared areas should be regularly cleaned and disinfected, and sharing tools and radios should be minimized.

The guidelines do not include routine testing of newly arrived firefighters—a practice used for athletes at training camps and students returning to college campuses. The Fire Management Board’s Wildland Fire Medical and Public Health Advisory Team wrote in a July 2 memo that it “does not recommend utilizing universal COVID-19 laboratory testing as a standalone risk mitigation or screening measure among wildland firefighters.” Rather, the group recommends testing an individual and directly exposed co-workers, saying that approach is in line with CDC guidance.

The lack of testing capacity and long turnaround times are factors, according to Forest Service spokesperson Dan Hottle. (The exception is Alaska, where firefighters are tested upon arrival at the airport and are quarantined in a hotel while awaiting results, which come in 24 hours, Hottle says.)

Fire crews responding to early season fires in the spring had some problems adjusting to the new protocols, according to assessments written by fire leaders and compiled by the Wildland Fire Lessons Learned Center. Shawn Faiella, superintendent of the interagency “hotshot crew” based at Montana’s Lolo National Forest (so named because such crews work the most challenging, or “hottest,” parts of wildfires), questioned the need to wear masks inside vehicles and the safety of bringing extra vehicles to space out firefighters traveling to a blaze. Parking extra vehicles at the scene of a fire is difficult on tight forest dirt roads, and would be dangerous if evacuations are necessary, he wrote.

“It’s damn tough to take these practices to the fire line,” Faiella wrote after his team responded to a 40-acre Montana fire in April.

One recommendation that fire managers say has been particularly effective is the “module of one” concept requiring crews to eat and sleep together in isolation for the entire fire season. “Whoever came up with it, it is working,” says Mike Goicoechea, the Montana-based incident commander for the Forest Service’s Northern Region Type 1 team, which manages the nation’s largest and most complex wildfires and natural disasters. “Somebody may test positive, and you end up having to take that module out of service for 14 days. But the nice part is you’re not taking out a whole camp.… It’s just that module.”

There is no single system that is tracking the total number of positive COVID-19 cases among wildland firefighters among the various federal, state, local and tribal agencies. Each fire agency has its own method, says Jessica Gardetto, a spokesperson for the Bureau of Land Management and the National Interagency Fire Center in Idaho.

The largest wildland firefighting agency in the U.S. is the Agriculture Department’s Forest Service, with 10,000 firefighters. Another major agency is the Department of the Interior, which had more than 3,500 full-time fire employees last year. As of the first week of August, 111 Forest Service firefighters and 40 BLM firefighters (who work underneath the broader Interior Department agency) had tested positive for COVID-19, according to officials for the respective agencies. “Considering we’ve now been experiencing fire activity for several months, this number is surprisingly low if you think about the thousands of fire personnel who’ve been suppressing wildfires this summer,” Gardetto says.

Goicoechea and his Montana team traveled north of Tucson, Arizona, on June 22 to manage a rapidly spreading fire in the Santa Catalina Mountains that required 1,200 responders at its peak. Within two days of the team’s arrival, his managers were overwhelmed by calls from firefighters worried or with questions about preventing the spread of COVID-19 or carrying the virus home to their families.

In an unusual move, Goicoechea called upon a Montana physician—and former National Park Service ranger with wildfire experience—Dr. Harry Sibold to join the team. Physicians are rarely, if ever, part of a wildfire camp’s medical team, Goicoechea says. Sibold gave regular coronavirus updates during morning briefings, consulted with local health officials, soothed firefighters worried about bringing the virus home to their families and advised fire managers on how to handle scenarios that might come up.

But Sibold says he wasn’t optimistic at the beginning about keeping the coronavirus in check in a large camp in Pima County, which has the second-highest number of confirmed cases in Arizona, at the time a national COVID-19 hot spot. “I quite firmly expected that we might have two or three outbreaks,” he says.

There were no positive cases during the team’s two-week deployment, just three or four cases where a firefighter showed symptoms but tested negative for the virus. After the Montana team returned home, nine firefighters at the Arizona fire from other units tested positive, Goicoechea says. Contact tracers notified the Montana team, some of whom were tested. All tests returned negative.

“I can’t say enough about having that doctor to help,” Goicoechea says, suggesting other teams might consider doing the same. “We’re not the experts in a pandemic. We’re the experts with fire.”

That early success will be tested as the number of fires increases across the West, along with the number of firefighters responding to them. There were more than 15,000 firefighters and support personnel assigned to fires across the nation as of mid-August, and the success of those COVID-19 prevention protocols depends largely upon them.

Paul, the Oregon firefighter, says that the guidelines were followed closely in camp, but less so out on the fire line. It also appeared to him that younger firefighters were less likely to follow the masking and social-distancing rules than veterans like him. That worries him: it wouldn’t take much to spark an outbreak that could sideline crews and cripple the ability to respond to a fire. “We’re outside, so it definitely helps with mitigation and makes it simpler to social distance,” Paul says. “But I think if there’s a mistake made, it could happen.”

KHN (Kaiser Health News) is a nonprofit news service covering health issues. It is an editorially independent program of KFF (Kaiser Family Foundation) that is not affiliated with Kaiser Permanente.


via Top Science and Health Stories

August 20, 2020 at 12:57AM

Researchers discover novel molecular mechanism that enables conifers to adapt to winter

Unlike broadleaf trees, conifers are evergreen and retain their photosynthesis structure throughout the year. Especially in late winter, the combination of freezing temperatures and high light intensity exposes the needles to oxidative damage that could lead to the destruction of molecules and cell structures that contribute to photosynthesis. Researchers have discovered a previously unknown mechanism that enables spruce trees to adapt to winter.

via ScienceDaily: Biotechnology News

August 20, 2020 at 01:39AM

Mounting poisonings, blindness, deaths as toxic hand sanitizers flood market

Hand sanitizer being applied to a person’s hand. (credit: Getty | Leopoldo Smith)

The Food and Drug Administration is renewing warnings this week of dangerous hand sanitizers as it continues to find products that contain toxic methanol—a poisonous alcohol that can cause systemic effects, blindness, and death.

The agency’s growing “do-not-use list” of dangerous sanitizers now includes 87 products. And with the mounting tally, the FDA also says there are rising reports from state health departments and poison control centers of injuries and deaths.

“We remain extremely concerned about the potential serious risks of alcohol-based hand sanitizers containing methanol,” said FDA Commissioner Stephen M. Hahn in a statement.



via Science – Ars Technica

July 29, 2020 at 07:45AM

Help the World by Healing Your NGINX Configuration

In his famous speech at the University of Texas in 2014, Admiral William H. McRaven said that if you want to change the world, start off by making your bed. Sometimes small things can have a big impact – whether it’s making your bed in the morning or making a few changes to your website’s HTTP server configuration.

Does that seem like an overstatement? The first months of 2020 have flushed down the drain all definitions of what’s normal and reasonable in our world. With almost half of the Earth’s population locked down in their homes due to the COVID‑19 pandemic, the Internet has become their only mode of communication, entertainment, buying food, working, and education. And each week the Internet is seeing higher network traffic and server load than ever before. According to a report published by BroadbandNow on March 25, “Eighty-eight (44%) of the 200 cities we analyzed have experienced some degree of network degradation over the past week compared to the 10 weeks prior”.

Major media platforms like Netflix and YouTube are limiting the quality of their transmissions in order to protect network links, making more bandwidth available for people to work, communicate with their families, or attend virtual lessons at their school. But still this is not enough, as network quality gradually worsens and many servers become overloaded.

You Can Help by Optimizing Your Website

If you own a website and can manage its HTTP server configuration, you can help. A few small changes can reduce the network bandwidth generated by your users and the load on servers. It’s a win‑win situation: if your site is currently under heavy load, you can reduce it, enabling you to serve more users and possibly lowering your costs. If it’s not under high load, faster loading improves your users’ experience (and sometimes positively affects your position in Google search results).

It doesn’t really matter if you have an application with millions of users each month or a small blog with baking recipes – every kilobyte of network traffic you eliminate frees capacity for someone who desperately needs to check medical testing results online or create a parcel label to send something important to relatives.

In this blog we present a few simple but powerful changes you can make to your NGINX configuration. As a real‑world example, we use the e‑commerce site of our friends at Rogalove, an ecological cosmetics manufacturer here in Poland where I live. The site is a fairly standard WooCommerce installation running NGINX 1.15.9 as its web server. For the sake of our calculations, we assume the site gets 100 unique users per day, 30% of users are recurring visitors, and each user accesses an average of 4 pages during a session.

These tips are simple steps you can take right away to improve performance and reduce network bandwidth. If you’re handling large volumes of traffic, you probably need to implement more complex changes to make a significant impact, for example tuning the operating system and NGINX, provisioning the right hardware capacity, and – most importantly – enabling and tuning caching; those topics are covered in depth elsewhere on the NGINX blog.

Enabling Gzip Compression for HTML, CSS, and JavaScript Files

As you may know, the HTML, CSS, and JavaScript files used to build pages on modern websites can be really huge. In most situations, web servers can compress these and other text files on the fly to conserve network bandwidth.

One way to see if a web server is compressing files is with the browser’s developer tools. For many browsers, you access the tools with the F12 key and the relevant information is on the Network tab. Here’s an example:

As you see at the bottom left, there is no compression: the text files are 1.15 MB in size and that much data was transferred.

By default, compression is disabled in NGINX but depending on your installation or Linux distribution, some settings might be enabled in the default nginx.conf file. Here we enable gzip compression in the NGINX configuration file:

gzip on;
gzip_types application/xml application/json text/css text/javascript application/javascript;
gzip_vary on;
gzip_comp_level 6;
gzip_min_length 500;

As you see in the following screenshot, with compression the data transfer goes down to only 260 KB – a reduction of about 80%! For each new user on your page, you save about 917 KB of data transfer. For our WooCommerce installation that’s 62 MB a day, or 1860 MB a month.
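The ~80% figure is typical for text formats: gzip thrives on the repetitive markup that HTML, CSS, and JavaScript are full of. As a rough illustration (the payload below is a made-up stand-in for a real page, not data from the site above), Python’s standard-library gzip at the same compression level shows the effect:

```python
import gzip

# A repetitive HTML-like payload, standing in for a real page.
html = ("<div class='product'><span class='price'>9.99</span>"
        "<a href='/shop/item'>Buy now</a></div>\n") * 2000
raw = html.encode("utf-8")

# compresslevel=6 mirrors gzip_comp_level 6 in the config above.
compressed = gzip.compress(raw, compresslevel=6)

ratio = 1 - len(compressed) / len(raw)
print(f"{len(raw):,} -> {len(compressed):,} bytes ({ratio:.0%} smaller)")
```

Real pages compress less dramatically than this artificially repetitive sample, but reductions around 70–80% for text assets are common.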

Setting Cache Headers

When a browser retrieves a file for a web page, it keeps a copy in a local on‑disk cache so that it doesn’t have to refetch the file from the server when you visit the page again. Each browser uses its own logic to decide when to use a local copy of a file and when to fetch it again in case it has changed on the server. But as the website owner, you can set cache control and expiration headers in the HTTP responses you send, to make the browser’s caching behavior more efficient. In the long term you get many fewer unnecessary HTTP requests.

For a good start, you can set a long cache expiration time for fonts and images, which probably do not change often (and even if they change, they usually get a new file name). In the following example we instruct the client browser to keep fonts and images in the local cache for a month:

location ~* \.(?:jpg|jpeg|gif|png|ico|woff2)$ {
    expires 1M;
    add_header Cache-Control "public";
}
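The location block matches purely on file extension. If you want a quick sanity check of which request paths the pattern covers, the same regular expression can be exercised in Python (re.IGNORECASE mirrors NGINX’s case-insensitive ~* modifier):

```python
import re

# Same extension pattern as the location block above.
pattern = re.compile(r"\.(?:jpg|jpeg|gif|png|ico|woff2)$", re.IGNORECASE)

for path in ["/img/logo.PNG", "/fonts/main.woff2", "/index.html", "/api/data.json"]:
    print(path, "-> cached" if pattern.search(path) else "-> not matched")
```

Note that HTML and JSON responses deliberately fall through: they change often, so long client-side expiry would risk serving stale content.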

Enabling HTTP/2 Protocol Support

HTTP/2 is a next‑generation protocol for serving web pages, designed for better network and host‑server utilization. According to the Google documentation, it enables much faster page loading:

The resulting protocol is more friendly to the network, because fewer TCP connections are used in comparison to HTTP/1.x. This means less competition with other flows, and longer‑lived connections, which in turn leads to better utilization of available network capacity.

NGINX 1.9.5 and later (and NGINX Plus R7 and later) support the HTTP/2 protocol, and all you need to do is enable it 😀. To do so, include the http2 parameter on the listen directives in your NGINX configuration files:

listen 443 ssl http2;

Note that in most cases, you also need to enable TLS to use HTTP/2.

You can verify that your (or any) site supports HTTP/2 with the HTTP2.Pro service.

Optimizing Logging

Make yourself a cup of your favorite beverage, sit comfortably, and think: when was the last time you looked at your access log file? Last week, last month, never? Even if you use it for day-to-day monitoring of your site, you probably focus only on errors (400 and 500 status codes, and so on), not successful requests.

By reducing or eliminating unnecessary logging, you save disk storage, CPU, and I/O operations on your server. This not only makes your server a little faster – if you’re deployed in a cloud environment, the freed‑up I/O throughput and CPU cycles might be a life saver for another virtual machine or application residing on the same physical machine.

There are several different ways to reduce and optimize logging. Here we highlight three.

Method 1: Disable Logging of Requests for Page Resources

This is a quick and easy solution if you don’t need to log requests that retrieve ordinary page resources such as images, JavaScript files, and CSS files. All you need to do is to create a new location block that matches those file types, and disable logging inside it. (You can also add this access_log directive to the location block above where we set the Cache-Control header.)

location ~* \.(?:jpg|jpeg|gif|png|ico|woff2|js|css)$ {
    access_log off;
}

Method 2: Disable Logging of Successful Requests

This is a more powerful method because it discards queries with a 2xx or 3xx response code, logging only errors. It is slightly more complicated than Method 1 because it depends on how your NGINX logging is configured. In our example we use the standard nginx.conf included in Ubuntu Server distributions, so that regardless of the virtual host all requests are logged to /var/log/nginx/access.log.

Using an example from the official NGINX documentation, let’s turn on conditional logging. Create a variable $loggable and set it to 0 for requests with 2xx and 3xx response codes, and to 1 otherwise. Then reference this variable as a condition in the access_log directive.

Here’s the original directive in the http context in /etc/nginx/nginx.conf:

access_log /var/log/nginx/access.log;

Add a map block and reference it from the access_log directive:

map $status $loggable {
    ~^[23]  0;
    default 1;
}

access_log /var/log/nginx/access.log combined if=$loggable;

Note that although combined is the default log format, you need to specify it explicitly when including the if parameter.
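The map effectively switches on the first digit of $status: anything in the 2xx or 3xx range is skipped, everything else is logged. A small Python sketch of the same decision logic:

```python
def loggable(status: int) -> bool:
    # Mirrors the NGINX map: statuses starting with 2 or 3 -> 0 (skip),
    # everything else -> 1 (log).
    return str(status)[0] not in "23"

for status in (200, 302, 404, 500):
    print(status, "logged" if loggable(status) else "skipped")
```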

Method 3: Minimizing I/O Operations with Buffering

Even if you want to log all requests you can minimize I/O operations by turning on access log buffering. With this directive NGINX waits to write log data to disk until a 512-KB buffer is filled or 1 minute has passed since the last flush, whichever occurs first.

access_log /var/log/nginx/access.log combined buffer=512k flush=1m;
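To get a feel for how much this helps, here is a back-of-envelope calculation. The 150-byte average line length is an assumption (a typical ‘combined’-format entry), not a measured value:

```python
BUFFER_BYTES = 512 * 1024   # buffer=512k from the directive above
AVG_LINE_BYTES = 150        # assumption: typical 'combined' log line length

# Under sustained load, one disk write covers this many requests,
# instead of one write per request.
requests_per_write = BUFFER_BYTES // AVG_LINE_BYTES
print(f"roughly {requests_per_write} requests per disk write at full load")
```

In other words, buffering can turn thousands of small writes into a single large one, which is exactly the kind of I/O pattern disks and cloud block storage handle best.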

Limiting Bandwidth for Particular URLs

If your server provides larger files (or smaller but extremely popular files, like forms or reports), it can be useful to set the maximum speed at which clients can download them. If your site is already experiencing a high network load, limiting download speed leaves more bandwidth to keep critical parts of your application responsive. This is a very popular solution used by hardware manufacturers – you may wait longer to download a 3-GB driver for your printer, but with thousands of other people downloading at the same time you’ll still be able to get your download. 😉

Use the limit_rate directive to limit bandwidth for a particular URL. Here we’re limiting the transfer rate for each file under /download to 50 KB per second.

location /download/ {
    limit_rate 50k;
}

You might also want to rate‑limit only larger files, which you can do with the limit_rate_after directive. In this example the first 500 KB of every file (from any directory) is transferred without speed restrictions, with everything after that capped at 50 KB/s. This enables faster delivery of critical parts of the website while slowing down others.

location / {
    limit_rate_after 500k;
    limit_rate 50k;
}

Note that rate limits apply to individual HTTP connections between a browser and NGINX, and so don’t prevent users from getting around rate limits by using download managers.
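A back-of-envelope sketch of what the limit_rate_after config above means in practice (ignoring the transfer time of the unthrottled burst and network overhead):

```python
def download_seconds(file_kb: float, rate_kb_s: float = 50, free_kb: float = 500) -> float:
    """Rough throttled-transfer time: the first free_kb is delivered
    unthrottled, the remainder is capped at rate_kb_s KB/s."""
    return max(file_kb - free_kb, 0) / rate_kb_s

# A 10 MB file: 500 KB free, then the remaining ~9.5 MB at 50 KB/s.
print(f"{download_seconds(10 * 1024):.0f} s of throttled transfer")
```

Small assets (under 500 KB here) pay no penalty at all, which is why this pattern keeps page loads snappy while still taming big downloads.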

Lastly, you can also limit the number of concurrent connections to your server or the rate of requests. For details, see our documentation.


We hope that those five tips help optimize your site’s performance. Speed and bandwidth gains vary by website. Even if tuning your NGINX configuration doesn’t seem to significantly free up bandwidth or improve speed, the overall impact of thousands of websites individually tweaking their NGINX configuration adds up. Our global network is used more efficiently, meaning that the most critical services are delivered when needed.

If you’re having any issues with NGINX at your site, we’re here to help! During the COVID‑19 pandemic, NGINX employees and the community are monitoring the NGINX channel on Stack Overflow1 and responding to questions and requests as quickly as possible.

If you work for an organization on the frontlines of the pandemic and have advanced needs, you may qualify for up to five free NGINX Plus licenses as well as a higher tier of F5 DNS Load Balancer Cloud Service. See Free Resources for Websites Impacted by COVID‑19 for details.

Also check out that blog for a rundown of other easy ways to improve website performance with free resources from NGINX and F5.

1Stack Overflow is a third‑party website and is not affiliated with F5, Inc. F5 and its affiliates disclaim any liability for content (including general information and proposed solutions to questions) posted on Stack Overflow or any other third‑party website.



April 22, 2020 at 04:34AM