Saturday, January 30, 2021

The Batman Cometh (covid)

 In my previous covid post The Third Wave, I made the somewhat cavalier statement that instead of having a 1% chance of dying next year, with covid you had a 1.1% chance.

Now the third wave has fully developed, into something if anything a bit bigger than the first. Is that assessment worth revisiting?

I think it is, not so much because it was wrong (we'll see) but because it was too general. In particular, mortality from covid depends drastically upon age. Here are deaths by age group from the CDC:


The third wave has developed into an eerily appropriate Batman's second ear. There are several things to note about this chart. The red line is 2020; the gray lines are previous years. But they are not death rates; they are death counts. Death rates climb much more steeply with age. Furthermore, the divisions are not comparable: look at the middle two. 45-64 is a 20-year cohort; 65-74 is only 10. So for a given count, the implied rate in the latter is twice that in the former.

Even so, it is clear (and quite true) that the effect of covid is bigger with advancing age. It rises from essentially nothing under 45 to about a 30% extra chance of dying (during any given period) at 90.

We can make this more apparent by taking actual actuarial figures and plotting the effect of the extra risk. 

The blue curve is the fraction of the population who (barely) reach a given age. The area under the curve is life expectancy at birth.

The orange is the same curve with covid, calculated with the excess death rates of late December.

That's important, so I'll repeat it. The orange curve assumes that the covid death rates continue all next year at the level of the peak of the third wave.
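If you want to play with such curves yourself, here is a minimal sketch of the calculation. The mortality table is a made-up Gompertz-style approximation (annual death risk doubling every eight years), not the real actuarial or CDC figures behind the charts:

    import numpy as np

    # Made-up Gompertz-style mortality: annual death risk roughly doubles
    # every 8 years of age. NOT the real actuarial table used above.
    ages = np.arange(0, 101)
    q_normal = np.minimum(0.0001 * 2.0 ** (ages / 8.0), 1.0)

    # Assumed covid excess: zero below 45, rising to ~30% extra risk at 90,
    # per the CDC chart discussed above.
    excess = 0.30 * np.clip((ages - 45.0) / 45.0, 0.0, None)
    q_covid = np.minimum(q_normal * (1.0 + excess), 1.0)

    # Fraction of a birth cohort surviving to each age (the blue and orange
    # curves); the area under each curve is life expectancy at birth.
    surv_normal = np.cumprod(1.0 - q_normal)
    surv_covid = np.cumprod(1.0 - q_covid)
    print("life expectancy:", np.trapz(surv_normal, ages))
    print("with covid excess:", np.trapz(surv_covid, ages))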

So what about that 1.1% instead of 1% chance of dying next year? As of now, roughly 0.1% of the US population has already died of covid. If it runs unchecked through the rest of the population, based on Diamond Princess numbers, it will claim another 0.2%. (That would amount to about 1.2 million deaths; we are already at 400,000.)

But critically, the risk isn't uniform. If you're under 40, your risk of dying next year is less than 0.2%, and covid does not appreciably increase it. If you are 70, your normal chance of dying next year would be 2.5%, and covid increases that to 2.9%. That is significant.

The bottom line here is that there simply isn't any one-size-fits-all response to the situation. 



Thursday, January 28, 2021

The Greenhouse Effect

There is a vast amount of disinformation out there about the greenhouse effect. This is mostly because it's a highly politicized bit of science. If it weren't, there wouldn't be much information about it of any kind, any more than there is about rates of stress-induced creep in iron-terbium alloys.

But instead, as Mark Twain said, it's not what you don't know that gets you, it's what you know that ain't so. And there is a YUGE amount of stuff that ain't so about the greenhouse effect on the internet. This ranges from the "we're all gonna die" idiocy of the far left to the "there's no such thing as the greenhouse effect" of the far right. (It was one of the latter that goaded me into writing this post.)

So what is the greenhouse effect? It's a term used by astronomers to refer to some of the major phenomena of a planetary atmosphere. The easiest way to see it is to compare two planets, one with and one without an atmosphere. Luckily, we happen to have two such planets handy: the Earth and the Moon. As an added bonus, they are the same distance from the Sun and thus get the same amount of sunlight.


So here's Buzz Aldrin standing on the Moon. Behind him you see the shadow of the Lunar Module. The temperature of the ground in shadow is 160 degrees below zero Fahrenheit. The temperature of the ground he's standing on is plus 180. He's wearing those thick-soled shoes for a reason.



Here's the greenhouse effect in one picture. In red we see the temperature on the Moon over the course of a year. It varies, top to bottom, by roughly 300C or 540F. 

The blue line is the temperature where I live, measured every 20 minutes at the local airport. The day/night variation is more like 20 degrees. (There is also a seasonal variation that the Moon doesn't have, due to the Earth's axial tilt.)

The difference between them is the greenhouse effect. Without the greenhouse effect, we would freeze to cryogenic temperatures every night, and literally boil every day. Without the greenhouse effect, we could not live and would never have evolved.

What is the major effect of the greenhouse effect? You may have heard that it is to raise temperatures. Wrong. Look at the graph again. The major, obvious effect is that the temperatures on the Moon vary 25 times more than the ones on Earth. The greenhouse effect compresses temperatures. As a secondary effect, it compresses them toward a level that is about two-thirds of the way up the Moon's scale.

How can that be? You will have heard that the greenhouse effect is like a blanket around you on a cold night. This is a misleading metaphor. You generate heat internally, so insulating you from a cold environment will warm you up. The Earth generates heat as well, but the amount of it is minor compared to the heat it gets from the Sun. So the appropriate metaphor is to put a blanket around you while you are standing next to a big hot fire. Perhaps you have done this in real life; I have. The blanket warms your back but cools your front. And that is what having an atmosphere does to a planet.

So what about the part where sunlight comes in one atmospheric window, but goes out another, and we're closing the outgoing one? 

Here's a chart of how this works. Scattering (which makes the sky blue) and ozone protect us from hard UV (another of the many things that would kill you on the Moon). Only about 3/4 of the Sun's light gets to the surface. Most of it is visible light, but there is a substantial amount of infrared as well; you can feel the warmth of the Sun on your face and other bare skin. There are plenty of places on Earth where the heat goes back out through the near-IR bands too; you'd feel the heat radiating from the sands of the Sahara or central Australia!

Even so, the majority of the outgoing heat leaves through the 10-micron atmospheric window. The three curves there represent the radiated energy as a function of wavelength for -82F (black), +8F (blue), and 98F (purple) respectively. All of the Earth is somewhere in this temperature range, so most of the thermal radiation it produces will go out through this "atmospheric window."
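You can check the shape and placement of those curves yourself with Planck's law. A minimal sketch (the kelvin temperatures are just the three Fahrenheit values above, rounded):

    import numpy as np

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, light speed, Boltzmann

    def planck(lam_um, T):
        # Blackbody spectral radiance at wavelength lam_um (microns), temp T (K).
        lam = lam_um * 1e-6
        return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

    # -82 F, +8 F, and 98 F are roughly 210 K, 260 K, and 310 K.
    for T in (210, 260, 310):
        peak = 2898.0 / T  # Wien's displacement law, in microns
        print(f"{T} K: peak near {peak:.1f} um, B(10 um) = {planck(10.0, T):.2e}")

All three peaks land within a few microns of 10, which is why that window matters so much.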

The atmospheric window is the gap in the absorption spectrum of water as seen on the top "major components" line. We can put it in perspective with the other energy flows through the atmosphere with this chart (with its two authors):


The atmospheric window is shown in red near the right side of the graphic.

As we noted, the atmospheric window is a function of the moisture content of the air. Chances are you have personal experience of this too. There are moist, cloudy nights where the temperature hardly drops at all, and clear, cold nights where there is hardly any water in the air.

The atmospheric window is smallest when the air is moist:

(Generated by HITRAN, the standard program/dataset for gaseous absorption. The horizontal scale is frequency, measured in kaysers, i.e. inverse centimeters.)

And here, slightly less moist air. The blue is of course absorption by water; the green you see on the left side is CO2.

And finally, very dry air, as over the Sahara or the Arctic. It should be clear that CO2 matters more when the atmospheric window is bigger, i.e. when there is less of the major, water-based greenhouse effect.

But we have been dumping CO2 into the atmosphere at unprecedented rates. What has that done to the window?

Here is the absorption spectrum of CO2 in the region of interest:

It's centered at about 670 kaysers, which is why it is hiding off to the left in the pictures of the atmospheric window above, which is centered at about 1000. Here it is again, but I have added the amount of extra absorption corresponding to the extra CO2 that has been added since 1900 (in red):


Wow, is that all? In a word, yes. The response to CO2 is logarithmic, meaning that absorption is proportional to the logarithm of the amount, and not to the amount itself. The reason is that, as you can see, the band where CO2 absorbs is already mostly saturated. The amount of carbon in the air over a square foot of Earth's surface is roughly equivalent to a square foot of black wool cloth. If it were carbon soot, the sky would be completely solid black. But it isn't; it's not carbon but carbon dioxide, which is transparent at most frequencies outside this band.
Only at the frequencies near the edges, where CO2 is translucent, does adding any more of it make much difference.
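You can put a rough number on that logarithmic response using the commonly cited Myhre et al. (1998) approximation; this is not derived in this post, and the concentrations are round numbers:

    import math

    # Commonly cited approximation (Myhre et al. 1998) for CO2 radiative
    # forcing: dF = 5.35 * ln(C / C0), in W/m^2.
    C1900, Cnow = 296.0, 410.0  # approximate ppm concentrations
    print(5.35 * math.log(Cnow / C1900))  # ~1.7 W/m^2 added since 1900
    print(5.35 * math.log(2.0))           # ~3.7 W/m^2 for any doubling

The second line is the hallmark of a logarithmic response: every doubling adds the same increment, no matter where you start.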

So here is the atmospheric window, this time with the CO2 we've added in the past century:
Humid air:
Moist:
Fair:
Arid:

So, the bottom line for CO2 is that it only makes much of a difference when the atmospheric window is already at its biggest, and the actual greenhouse effect is almost entirely due to water. And given that roughly three quarters of the Earth's surface is ocean, there's not much we could do about that.
Even if we wanted to.
We would expect the effect of CO2 to appear primarily where there is not much water in the air. In other words, deserts and places where the water freezes out. Have a look:

As a bottom line, let me repeat: the greenhouse effect is very real. It is absolutely, desperately important for life on Earth: without it, the very air would freeze every night. It is 99.9% natural. It's been here since the Earth had an atmosphere and liquid water.
And that's a wrap!

Friday, January 22, 2021

The Henry Adams Curve: a closer look

From Where is my Flying Car, chapter 3: 

 Running on Empty

We could have predicted over the last few years what the American government's policies on oil and natural gas would be if we had assumed that the aim of the American government was to increase the power and income of the OPEC countries and to reduce the standard of living in the United States. 

—(Economics Nobel laureate) Ronald Coase (1981)

Henry Adams, scion of the house of the two eponymous Presidents, wrote in his autobiography about a century ago:

The coal-output of the world, speaking roughly, doubled every ten years between 1840 and 1900, in the form of utilized power, for the ton of coal yielded three or four times as much power in 1900 as in 1840. Rapid as this rate of acceleration in volume seems, it may be tested in a thousand ways without greatly reducing it. Perhaps the ocean steamer is nearest unity and easiest to measure, for any one might hire, in 1905, for a small sum of money, the use of 30,000 steam-horse-power to cross the ocean, and by halving this figure every ten years, he got back to 234 horse-power for 1835, which was accuracy enough for his purposes. 

In other words, we have had a very long-term trend in history going back at least to the Newcomen and Savery engines of 300 years ago, a steady trend of about 7% per year growth in usable energy available to our civilization. Let us call it the “Henry Adams Curve.” The optimism and constant improvement of life in the 19th and first half of the 20th centuries can quite readily be seen as predicated on it. To a first approximation, it can be factored into a 3% population growth rate, a 2% energy efficiency growth rate, and a 2% growth in actual energy consumed per capita. 

Here is the Henry Adams Curve, the centuries-long historical trend, as the smooth red line. Since the scale is power per capita, this is only the 2% component. The blue curve is actual energy use in the US, which up to the 70s matched the trend quite well. But then energy consumption flatlined.


This is the text where I introduce and illustrate what I call the Henry Adams Curve. But as it is one of the key points in the book, it bears some further looking into. The history of American energy use is interesting and complex, and has gone through several major phases. The graph above, and all the others you'll see in this post, came from the data behind this graph at the EIA:


The first point of interest is the deep history of energy in the 1700s. For nearly a century on the graph, the main energy source is listed as wood. In fact all the energy that did useful work came from food, and almost all the lighting from tallow candles, neither of which is listed. Wood was used for heating, but as you will learn if you ever visit Monticello, when Thomas Jefferson woke up on a winter's day, he would have to break the ice on his bedside washbasin to wash his face. (King Henry II does the same in the classic film The Lion in Winter; that's just the way things worked.)

And yet the EIA lists a LOT of wood being used in the early days. This is clearer when you divide their numbers (in quadrillion BTUs) by population and convert to kilowatts per capita:

Brown is wood; black is coal.
If we take this at face value, then for the latter half of the 19th century, the period Adams was writing about, coal simply swapped in for wood; and before that, there had long been about 3 kW of wood per person.
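The conversion itself is simple; here's a sketch with invented round numbers, not the actual EIA or census figures:

    QUAD_J = 1.055e18      # joules in one quad (quadrillion BTU)
    SEC_PER_YEAR = 3.156e7

    def kw_per_capita(quads_per_year, population):
        # Convert an EIA-style quads/year figure to kilowatts per person.
        return quads_per_year * QUAD_J / SEC_PER_YEAR / population / 1000.0

    # e.g. a hypothetical 2 quads of wood burned by 20 million people:
    print(kw_per_capita(2.0, 20e6))  # ~3.3 kW per person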

But if that was the case, what on Earth was Henry talking about? Well, for example, in 1800 you went to England on a wooden sailboat, and in 1900 you went on a coal-powered steamship. Big difference. Doesn't show up in the raw energy numbers.
The bottom line is that treating wood as fungible with the other energy sources is very problematic. In the early days, the vast majority of woodcutting was not for firewood; it was to clear land for farming. (And those farms provided the real fuel, i.e. food.) Wood didn't save work; it cost a lot of work. If I clear land with a controlled burn, that wood simply shouldn't count as energy fuel.
So we need to think of the Henry Adams Curve primarily in terms of coal and subsequent fossil fuels. They were useful; they let us do things we couldn't before; they powered the Industrial Revolution. Henry Adams lived through the flourishing of the Industrial Revolution in America, and that is what he was talking about.

So here's the curve again, just fossils this time. We can zoom in on the Victorian period Henry was talking about, and it fits an exponential growth curve remarkably well (and remember, we are talking per capita):
Wow -- and guess what: this isn't a 2% growth rate, it's a 4.2% growth rate. This curve was almost entirely coal-driven. The inflection point where it begins to flatline is 1911, just before World War I.
For most of the rest of the 20th century coal stayed more or less flat, but another curve based on oil, gas, and nuclear succeeded it:

Again, quite a good fit for those years; the big dip is of course the Great Depression.
The growth rate of the second curve isn't 2% either: it's 5.7%.
(Measured above a baseline level of 5 kW.)
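The fits are of the form baseline-plus-exponential. Here's a sketch of how one might do them; the data points are stand-ins, not the actual EIA-derived series:

    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, base, a, r):
        # Exponential growth above a fixed baseline level.
        return base + a * np.exp(r * (t - 1900.0))

    # Stand-in per-capita figures, NOT the actual series plotted above:
    years = np.array([1900.0, 1920.0, 1940.0, 1960.0, 1980.0])
    kw = np.array([5.3, 5.7, 6.5, 8.3, 12.4])

    (base, a, r), _ = curve_fit(model, years, kw, p0=(5.0, 0.3, 0.05))
    print(f"baseline {base:.1f} kW, growth rate {100 * r:.1f}% per year")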

You can see that the 2% curve (green) I used as the Henry Adams Curve in the book is a trend averaged from both of these.

If we had stayed on even the second curve, we wouldn't be using just 2.5 times as much energy; we'd be using 5 times as much. And as I argue in the book, that should have been quite technologically feasible.
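Both figures are easy to sanity-check with compound growth, assuming the trend broke around 1975:

    # The 2% per-capita Henry Adams trend, from roughly 1975 to today:
    print(1.02 ** (2021 - 1975))  # ~2.5x
    # And the component rates compound to the ~7% total quoted above:
    print(1.03 * 1.02 * 1.02)     # ~1.071, i.e. about 7% per year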

If only we had wanted to.


Sunday, January 17, 2021

How to Build a Flying Car, IV: Optimal Control

 Let's have a look at the landscape of P-D controllers again:

You will remember that it plots the standard deviation of the error, i.e. the difference between where the car was and where we wanted it, above a plane defined by how hard we push given an error (left axis) and the slope of the line above which we measure the error (right axis). (Note that these numbers are just the coefficients of P and D in the standard formulation of a P-D controller.)

The valley indicating the best results appears to favor higher proportionality and lower, but non-zero, slope. But you can't increase proportionality forever; that would imply infinite force, and your motor is only capable of so much. Which should lead you to guess that we are going to have to look at bang-bang control again.

Here's the phase space again, with a controller that is stuck trying to go up as hard as it can. If, for example, you are sitting perfectly at the origin, position and velocity both zero, you would accelerate upward ad infinitum, to ever-higher altitudes and speeds.

That's represented by the red line in the upper right. But the line we are interested in is the blue one. It represents what happens when you are above the desired point but decelerating as hard as you can. If you happen to be on the blue line, you will come in to a perfect one-point landing, hitting zero altitude and zero speed exactly, with no spiraling in.

Let's look at that quadrant more closely, pretending we are controlling a lunar lander on its way down to the moon:

Again, if you are on the blue line, perfect landing. If you are below it, you are too low and/or too fast, and you are S.O.L. -- you are going to hit no matter what. On the other hand, if you are above the blue line, you aren't going to hit at all. Remember the engine is on full pushing you up; you're just doing a takeoff from midair.

That's not what you wanted; you want to land. So what do you do?

You're hanging in space over the moon, and you need to get down. So you blast down as hard as you can until you are on the blue line, and then decelerate as hard as you can for a perfect landing. It should be fairly obvious with a little thought that this is the fastest way to get to the zero-zero point.

That is the essence of optimal control. It actually works from anywhere in the phase space, is the fastest possible way to get to zero, and gets you there in just two strokes, with no spiraling in.

In the parlance, the blue line is called the "switching curve."
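In code, the whole optimal controller is only a few lines. A sketch (the names and units are mine, not from any particular autopilot):

    def bang_bang(alt, vspeed, a_max):
        # Time-optimal control toward (0, 0) with thrust limited to +/- a_max.
        # s = 0 exactly on the switching curve alt = -vspeed*|vspeed|/(2*a_max):
        # blast down above the curve, up below it.
        s = alt + vspeed * abs(vspeed) / (2.0 * a_max)
        if s > 0:
            return -a_max
        if s < 0:
            return a_max
        return 0.0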

All of that assumes, of course, that your thrust and timing are perfectly exact and there is no wind. Even with wind it works quite well, although you hunt around the setpoint a bit:

The reason for the hunting or dithering is subtle. The switching curve is two halves of a parabola, and the closer you get to the origin the closer that gets to a straight line:

... and what's more, the closer the straight line is to absolutely flat! That means that within some small distance from the origin, we lose derivative control and the system orbits, like this:
We can get it back with a hybrid controller that is mostly bang-bang optimal, but is P-D in a small band next to the switching curve. We expand the switching curve like this:

placing the proportional region above it on the left, below on the right, and crossing in the middle, preserving an angle where the curve goes flat. The control signal as a function of the phase plane now looks like this:

This works quite well. In a run with no wind,

it does just about what we want from the optimal control in most of the space, with a very fast spiral at the end:


It does quite well with wind, too; in this graph orange is wind, green is the control signal, and blue the result. 

You can see that big sustained gusts push the system into bang-bang mode, but the rest of the time it maintains a nice (and smooth) hold on position with a proportional response. 

In phase space the result is a tight locus most of the time, with the excursions caused by major gusts exhibiting the characteristic clamshell orbits of optimal control.
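Here is one way such a hybrid might look in code; a sketch of the scheme described above, with the band width and P-D gains left as parameters to tune:

    def hybrid(alt, vspeed, a_max, band, kp, kd):
        # Bang-bang far from the switching curve; P-D inside a band around it.
        s = alt + vspeed * abs(vspeed) / (2.0 * a_max)  # switching function
        if s > band:
            return -a_max                               # optimal: blast down
        if s < -band:
            return a_max                                # optimal: blast up
        u = -(kp * s + kd * vspeed)                     # proportional region
        return max(-a_max, min(a_max, u))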

There is one other phenomenon we need to look at, and that is lag. We noted way back that lag was the major problem for a controller, introducing derivative control to counter it. How well does our hybrid controller stand up to lag?

Here's a map of performance of a hybrid controller on a landscape with the width of the switching-curve band going up to the left and the amount of lag going up to the right. You can see that the controller gets better as it gets closer to the pure optimal controller, i.e. with a thinner band, except that lag just completely kills it. 

Luckily it's a fairly simple relationship, and if you know what to look for, it's not hard to design a good controller for the particular parameters of your system.
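Lag itself is easy to model in a simulation: just delay the control signal by a fixed number of steps before it takes effect. A sketch of such a test harness, with illustrative parameters (not the ones behind the map above):

    import random
    from collections import deque

    def stdev_with_lag(controller, lag_steps, dt=0.02, steps=5000):
        # Integrate alt'' = delayed control + wind; return std dev of altitude.
        alt, vspeed, history = 1.0, 0.0, []
        pipeline = deque([0.0] * (lag_steps + 1), maxlen=lag_steps + 1)
        for _ in range(steps):
            pipeline.append(controller(alt, vspeed))
            u = pipeline[0]  # the control signal computed lag_steps ago
            vspeed += (u + random.gauss(0.0, 0.5)) * dt
            alt += vspeed * dt
            history.append(alt)
        mean = sum(history) / len(history)
        return (sum((a - mean) ** 2 for a in history) / len(history)) ** 0.5

    # e.g. stdev_with_lag(lambda a, v: hybrid(a, v, 1.0, 0.1, 5.0, 3.0), 10)

Sweeping the band width and the lag through such a harness produces a performance landscape like the one above.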

There's a lot more to control theory; I've barely scratched the surface here. Hopefully I have given you enough intuition to get a leg up on simple control system design problems.

Saturday, January 16, 2021

How to Build a Flying Car, III: Phase Space

 How to Build a Flying Car: an occasional series of posts that examine various points of interest.

We finished the last post looking at a controller that didn't work: a bang-bang that simply always ran the motor as hard as possible in the direction of the goal state. We hinted that the problem might be fixed by a controller with a more proportional response: turning on the motor to a degree proportional to the distance from the goal.

This is, after all, exactly what the Watt spinning-ball steam engine governor did. And that worked pretty well!

But that would be to make a subtle mistake. To understand why, we need a tool that is basically just another way of looking at the process. What we are going to do is plot the above graph again, but instead of plotting the altitude against time, we are going to plot it against vertical speed. Physicists call such a chart a phase space, and there is a good reason why this is a good name, but we will ignore it for the moment.

So here is the very same run as above, but plotted in the phase space. The vertical axis is exactly the same, but instead of plotting time running left to right, we plot whatever the vertical speed happens to be on the horizontal axis:


(The system is moving counterclockwise, and spiraling outward, i.e. getting further and further from the origin. This is bad!)

Note several things: all the segments are parts of parabolas. Segments in the top half are uniform acceleration downwards, and segments in the bottom half are uniform acceleration upward (plus a bit of random "wind").

Every trace crosses the y-axis at right angles every time. See if you can figure out why this has to be the case for the y-axis, but not for the x-axis.

But the most important point is slightly subtle: the points where the controller switches acceleration don't lie on the x-axis! They're on a line that slopes ever so slightly down to the left and up to the right.

This is due to lag in the controller. Like Wile E. Coyote walking on air for a few steps before he realizes he's run off a cliff, the controller runs a little way across the axis before it realizes it needs to switch. It's this lag that makes each parabolic segment a little bigger than the previous one, with the unfortunate consequence of wandering off to infinity.

It is NOT the bang-bang nature of the control that causes this instability; it's the lag. To investigate this, let's use a proportional controller. The only difference in the phase space view is that the cycles will be circles rather than almond shapes:


Phase space map for a proportional controller with zero lag.

If you get the lag to be exactly zero, then the system will plot out an exact circle in the phase space, neither expanding nor contracting. Here's a plot of a system with some lag, and one tuned as close to zero lag as I could get:

Note that the lagged run does suffer from increasing oscillations and will ultimately run away.

By now the solution should be obvious: mathematically induce some negative lag. How do we do this?



It is simplicity itself if you are thinking in phase spaces. Instead of making the control response proportional to the distance above the zero line, make it proportional to the distance above this tilted line. Then, in the phase space, the track of the system will tend to spiral in to the center. In practice, if you get the slope of the line and the constant of proportionality right, it works incredibly well.
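In code, the trick is a one-liner (the names are mine):

    def control(alt, vspeed, k, slope):
        # Respond in proportion to the height above the tilted line
        # alt = -slope * vspeed, rather than above the x-axis.
        return -k * (alt + slope * vspeed)

With slope zero this is plain proportional control; a positive slope is the induced negative lag.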

Here's a run with the same scale of wind as the above but with the control parameters tuned just right.



 And just to show off, here, on the same scale, is the phase space track of the same run:

Maxwell Smart couldn't have done better.

How did I find those optimal values for the line tilt and proportionality constant? Brute force. I simply ran a lot of cases and measured the standard deviation of the result, picking the smallest. It helps to visualize the result as a valley in parameter space:


... which may help you in designing systems and knowing what results to expect. Note that the surface is wrinkly due to the random "wind."
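The search itself is nothing fancy. A sketch, with a toy simulation standing in for the real one (note that fixing the random seed gives every parameter pair the same "wind", which would smooth out those wrinkles):

    import random
    import statistics

    def run(k, slope, dt=0.02, steps=4000):
        # One wind-driven run under the tilted-line controller; returns the
        # standard deviation of the altitude error (the height of the surface).
        random.seed(1)  # same "wind" for every parameter pair
        alt, vspeed, history = 0.0, 0.0, []
        for _ in range(steps):
            u = -k * (alt + slope * vspeed)
            vspeed += (u + random.gauss(0.0, 0.5)) * dt
            alt += vspeed * dt
            history.append(alt)
        return statistics.pstdev(history)

    best = min((run(k, m), k, m)
               for k in (0.5, 1.0, 2.0, 5.0, 10.0)
               for m in (0.0, 0.25, 0.5, 1.0, 2.0))
    print(f"stdev {best[0]:.3f} at k={best[1]}, slope={best[2]}")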

A couple of other notes: if you read the control theory literature, typically they put the system position, which they call "error," on the x-axis and velocity on the y-axis, and the system moves clockwise. I used the scheme here because we are actually interested in a vertical parameter, and the same variable is on the vertical axis in time-series charts.

In standard control theory distance from the goal state is called "error" and the proportional signal is proportional to that. What we called velocity is called "derivative," as it is the derivative of the error function (and the system is often not a moving object with a speed). So the formulation here is called a proportional-derivative or P-D controller.

Finally, the derivative term, or the slope of the line in our formulation, acts as a force against the motion that increases with speed; in other words, it acts like friction (in fancy parlance, "damping"). The reason Watt's governor worked so well was that it had enough real friction to create a converging phase space!

Next up: Optimal Control!

Friday, January 15, 2021

How to Build a Flying Car, II: Control Theory

How to Build a Flying Car: an occasional series of posts that examine various points of interest.

Here you are in your flying car. You are at 10,000 feet and need to go somewhere. You have 50 thruster fans pointed down through the floor of your car and 10 more pointed back. What you need now is some way to tell all of them how much to thrust. You don't have 50 hands, and even if you did, you don't have the brainpower to use them all properly to avoid the fate of the VertiJet's test pilots.

You need control theory.

Control theory has been a substantial part of engineering since James Clerk Maxwell published On Governors in the Proceedings of the Royal Society in 1868. This was the day of the steam engine and the Watt governor, descended from the spinning-ball governors used in windmills, which kept the engines running at an even speed.


The Watt governor is fairly simple to understand: the speed of the engine makes the balls stand out by centrifugal force, and thus pull up on a yoke that cuts off the steam supply. There is a happy medium where the supply and speed match just right, and that is the speed at which the engine will tend to run. Setting the balls and the steam valve right was something of an art; it was an attempt to reduce this to a science that was Maxwell's aim.

Maxwell's analysis sets the stage for another trend in control theory: it contains no fewer than 73 mathematical formulae and equations. And I can assure you that it is easier to read than anything published on the subject in the 20th century.

What I would like to do in the next few posts is try to give an overview of a small part of control theory: nowhere near what an engineering student would learn in the course of a degree, but enough to give you an intuitive grasp of what is going on. Note that even this little part is usually covered using differential equations, Laplace transforms, and their maps in the complex plane; but I will strive to avoid that here.

So, here is the problem. Here is your flying car, and I have set the fans so that the thrust is exactly equal to the weight of the car. It ought to hang there motionless. But there is also a disturbing force, which here I have labelled "wind" and modeled with a randomly varying force.


The wind is the wiggly orange and green lines at about 0, from two separate runs. The blue and red lines, respectively, represent the altitude of the car. As you can see, the car doesn't stay put. Whenever it gets a couple of seconds of sustained gusts, it starts moving, and its momentum is enough to keep it going even though the wind returns to an average of zero.  

So what's a flying car to do? A first attempt is to use a controller of a kind we're all familiar with: the thermostat in your home. It's either on or off (or, in a more sophisticated version, turns on the heat if it's too cold and the air conditioner if it's too hot). But it's full force in either case. For that reason, a controller of this kind is often referred to as a "bang-bang" controller. For heating and cooling, this actually works rather well. But for our flying car, we have a little problem:


The scale of the wind is the same here as before. We have drawn in orange the control signal, either pushing up or pushing down. But the blue signal, the car's altitude, is engaging in sickening leaps and plunges, which get worse as time goes on!

What's happening, of course, is that the corrective force, acting the whole time the car is above its intended altitude, pushes it so hard that by the time it gets back to zero, it's going down too fast to stop. For heating and cooling, the system doesn't have that kind of momentum; for flying cars, it does. And that's enough to make the control problem quite a bit more complex.
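You can reproduce those leaps and plunges in a few lines of simulation. A sketch with invented scales; note that even the discrete time step acts as a tiny lag, a point we will come back to in the next post:

    import random

    alt, vspeed, dt = 0.0, 0.0, 0.02
    for step in range(5000):
        u = -1.0 if alt > 0 else 1.0  # full thrust toward altitude zero
        vspeed += (u + random.gauss(0.0, 0.5)) * dt
        alt += vspeed * dt
        # alt swings back and forth in ever-growing oscillations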

Something like this happens in airplanes under human control, as well: when it does, it's called PIO for Pilot-Induced-Oscillation. A human pilot, of course, doesn't operate by slamming the controls back and forth to the stops; we try for a proportional response to what we observe. But PIO can still occur; the base reason for it is very different from "bang-bang syndrome." 

Enlightenment lies ahead in our next episode!