Saturday, February 20, 2021

Energy Mess in Texas

So we were having dinner with some friends last night, and the topic of the Texas blackouts came up. It quickly became clear that people's views didn't just have an ideological slant; the slant was so strong you wouldn't believe they were talking about the same situation. On one side, it was because the windmills froze. On the other, the windmills didn't matter because they were only 14% of the total; it was because Texans were gun-toting cowboys who had to do things their own way and refused to listen to the real experts.

So let's go to the EIA (the federal Energy Information Administration) and see what actually happened with energy generation in Texas:

The tan line across the top is natural gas-fired generation. Brown is coal, red is nuclear, and the green line is wind.

First and most obvious point: on 15 Feb every significant form of generation took a hit. Gas dropped about 10 GW, coal and nuclear each dropped a notch, and wind went from varying between 5 and 10 GW to varying between 0 and 5.

That's just how ice storms work: they freeze windmills; they freeze up the big piles of coal that are the reserves at those plants; they freeze up valves in gas pipelines and even safety sensors at nuclear plants. Furthermore, a huge unexpected cold snap diverts natural gas for heating, and there simply wasn't enough of it.

Unexpected? Indeed. Less than a month before, the forecast for Texas had been unseasonably hot:

This was from the Weather Channel, but they were using NOAA Climate Prediction Center numbers.

What actually happened? When push came to shove, Texas generating capacity was reduced by the storm and maxed out well below demand. They had no excess capacity to fall back on. The deep question is: why not?

This essay by a retired electric utility planning engineer sheds some light on the subject. Most of the country has utility markets where people pay both for energy delivered and capacity. You have to pay for capacity because it costs money to build the extra plants, the extra pipelines and storage facilities, and so forth. No money, no extra capacity to use in emergencies such as ice storms.

Paying for capacity is like paying for seatbelts and airbags in your car: you hope never to need them, but...
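To make the economics concrete, here's a toy sketch in Python. All the numbers are illustrative assumptions of mine, not actual ERCOT or market figures; the point is just the shape of the problem: a peaker plant that runs a few hundred hours a year can't come close to covering its fixed costs on energy sales alone.

    # Toy model: can a 100 MW gas peaker that runs only during scarcity
    # cover its fixed costs on energy sales alone?
    # All numbers are illustrative assumptions, not real market data.
    capacity_mw = 100
    fixed_cost_per_mw_year = 90_000        # assumed cost of keeping the plant ready, $/MW-yr
    hours_run_per_year = 200               # a true peaker: ~2% of the 8,760 hours in a year
    energy_margin_per_mwh = 40             # assumed price received minus fuel cost, $/MWh
    capacity_payment_per_mw_year = 85_000  # assumed capacity-market payment, $/MW-yr

    fixed_cost = capacity_mw * fixed_cost_per_mw_year
    energy_revenue = capacity_mw * hours_run_per_year * energy_margin_per_mwh
    total_with_capacity = energy_revenue + capacity_mw * capacity_payment_per_mw_year

    print(f"Fixed cost:            ${fixed_cost:,}")           # $9,000,000
    print(f"Energy-only revenue:   ${energy_revenue:,}")       # $800,000 -- not even close
    print(f"With capacity payment: ${total_with_capacity:,}")  # $9,300,000 -- the plant survives

In an energy-only market, that plant has to make up the difference through rare, astronomically high scarcity prices, which is exactly the bet that goes wrong when the plant itself freezes.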

But in Texas, there is no capacity market: you pay only for energy actually delivered. The main reason is politics. The wind-generation sector in Texas (most famously T. Boone Pickens, but plenty more) has huge clout there. Texas generates more electricity from wind than any other state, California included, and indeed more than almost any country.

But, and this is the crucial fact, wind has no excess capacity. Look at the graph again: from 14 to 15 Feb, wind power dropped from about 10 GW to zero. Not only did half the windmills freeze, but the wind stopped blowing. A capacity market in Texas would have made wind significantly more expensive, and was thus politically untenable.

So Texas went with the delivered-energy-only market, specifically to help wind. Which worked. And when times are good, it's cheaper, like a car with no airbags. But when the ice came, Texas had neither belt nor suspenders--and got caught with its pants down.


Thursday, February 4, 2021

Powerpaste, the fuel of the future

 Following in the theme of my ammonia posts, here's a new idea in hydrogen carriers as fuel for hydrogen fuel cells. It's from the Fraunhofer Institute, who call it powerpaste. It's basically magnesium hydride formulated in paste form. Add water and you get more hydrogen than you started with, the rest coming from the water. 
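For reference (this is standard chemistry, not something specific to Fraunhofer's formulation), the reaction is the hydrolysis of magnesium hydride:

    MgH2 + 2 H2O → Mg(OH)2 + 2 H2

so half of the released hydrogen comes from the hydride and half from the water, which is how you get out more hydrogen than you packed in.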



Unlike ammonia, the residue doesn't vanish as air and water; it's basically milk of magnesia, and would need to be collected and recycled back into hydride at a plant. But the paste has (according to them) ten times the energy density of a battery, which makes it a viable automotive power source. So to refuel, you swap out cartridges.
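As a rough plausibility check on that energy-density claim, here's a back-of-the-envelope calculation in Python. All inputs are round numbers I'm assuming (reported ~10 wt% hydrogen in the paste, textbook heating value for hydrogen, typical fuel-cell and battery figures), not Fraunhofer's data:

    # Back-of-the-envelope check of the "ten times a battery" claim.
    # All inputs are assumed round numbers, not Fraunhofer data.
    h2_fraction = 0.10        # ~10% hydrogen by weight in the paste
    h2_lhv_mj_per_kg = 120    # lower heating value of hydrogen, MJ/kg
    fuel_cell_eff = 0.5       # assumed fuel-cell electrical efficiency
    li_ion_wh_per_kg = 250    # typical lithium-ion cell, Wh/kg

    paste_wh_per_kg = h2_fraction * h2_lhv_mj_per_kg * 1e6 / 3600 * fuel_cell_eff
    print(f"Paste, as electricity: ~{paste_wh_per_kg:.0f} Wh/kg")                # ~1667 Wh/kg
    print(f"Ratio to Li-ion:       ~{paste_wh_per_kg / li_ion_wh_per_kg:.1f}x")  # ~6.7x

Six or seven times, before counting the water tank and cartridge hardware--the same ballpark as their factor of ten, so the claim at least passes the smell test.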

Given that they are pushing it for mopeds, I'm guessing that the full-cycle efficiency isn't high, but one imagines that could be improved over time.


Wednesday, February 3, 2021

Feynman could have saved your life

We have had Covid vaccines since March 2020, and yet we sat around all year, letting the pandemic develop into a major catastrophe, waiting for nothing but the permission of evil bureaucrats to use them.

Well, not quite. There is a huge difference between discovering the recipe for something in the lab and producing enough of it to protect billions of people. As we are seeing, nearly a year later the limited production of the vaccine is still a major bottleneck to public-scale inoculation. 

Some idiot on Twitter seemed to think that the problem was evil drug companies hoarding their recipes and this could all be solved by having everybody join in and produce vaccines. As often happens, the resulting Twitstorm provoked someone who knew something about it to speak up. In this case, it was Derek Lowe, pharma industry expert and blogger for Science magazine. Here is his post Myths of Vaccine Manufacturing.

To sum up the key point of the post, Lowe lists six major steps in producing a vaccine of the new mRNA type:

  1. Produce the appropriate stretch of DNA.
  2. Produce mRNA from your DNA.
  3. Produce the lipids that you need for the formulation.
  4. Take your mRNA and your lipids and combine these into lipid nanoparticles (LNPs).
  5. Combine the LNPs with the other components and fill vials.
  6. Get vials into trays, packages, boxes, trucks, etc.
To make a long story short (do read the original and the mostly informative comments if you are interested), the major bottleneck is step 4. As Lowe puts it, "Everyone is almost certainly having to use some sort of specially-built microfluidics device to get this to happen .... Microfluidics (a hot area of research for some years now) involves liquid flow through very small channels, allowing for precise mixing and timing on a very small scale.... My own guess as to what such a Vaccine Machine involves is a large number of very small reaction chambers, running in parallel, that have equally small and very precisely controlled flows of the mRNA and the various lipid components heading into them. You will have to control the flow rates, the concentrations, the temperature, and who knows what else, and you can be sure that the channel sizes and the size and shape of the mixing chambers are critical as well. These will be special-purpose bespoke machines, and if you ask other drug companies if they have one sitting around, the answer will be “Of course not”. This is not anything close to a traditional drug manufacturing process."
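To get a feel for the regime these devices operate in, here's a quick calculation of the Reynolds number for a microchannel. Lowe doesn't give dimensions, so the channel size and flow speed are textbook-typical values I'm assuming:

    # Reynolds number in an assumed 100-micron microfluidic channel.
    # Low Re means laminar flow: mixing is by diffusion, precisely
    # controllable and repeatable -- which is the whole point.
    density = 1000.0     # water, kg/m^3
    viscosity = 1.0e-3   # water, Pa*s
    channel_d = 100e-6   # assumed channel diameter, m
    velocity = 0.1       # assumed flow speed, m/s

    reynolds = density * velocity * channel_d / viscosity
    print(f"Re = {reynolds:.0f}")   # Re ~ 10; turbulence needs ~2000+

At Re ~ 10 there's no turbulence to smear out your mistakes, but none to do your mixing for you either, which is why the flow rates, concentrations, and geometry all have to be controlled so tightly.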

So I went and looked at what these microfluidics machines for producing lipid nanoparticles (LNPs) looked like. What's the scale, how close to nanotech do you have to be, etc, etc.

(Figure from ACS Omega 2018, 3, 5, 5044–5051.)
So here's a sketch of the kind of thing we are talking about, and the scale of the nanoparticles themselves, which tend to range from 20 to 100 nanometers.

Here's a closer look at the actual gadget:

Although it produces ~50 nm LNPs, the gadget itself is micro-scale, not nanoscale, technology. Such devices are generally made by photolithography, on the same kind of machines used to make computer chips. This is a tricky and complex process, with a lot of planning, work, and development between even a well-specified design and a usable product.

So what if we had a technology that could produce things like this right off the shop floor? If the scale were millimeters instead of microns, you could make one of these in an hour in your garage with a slab of MDF and a router. I could print one tenth of that scale on my 3D printer, and there are printers out there that could beat that by another factor of 10.
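For concreteness, here's the feature-size ladder I have in mind; the figures are my own order-of-magnitude assumptions for each technique:

    # Rough feature-size ladder for the scaling argument above.
    # All figures are order-of-magnitude assumptions.
    techniques = [
        ("Garage router on MDF", 1e-3),                 # ~1 mm features
        ("Hobby 3D printer", 1e-4),                     # ~100 um
        ("High-end 3D printer", 1e-5),                  # ~10 um
        ("Photolithography (the real device)", 1e-6),   # ~1 um and below
    ]
    for name, feature_m in techniques:
        print(f"{name:<40} ~{feature_m * 1e6:>6,.0f} um features")

Each rung is a factor of ten, and each rung down costs enormously in accessibility: the top of the ladder is a weekend project, the bottom is a fab.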

If we had full-fledged nanotech now, none of this would matter; after all, an LNP is just an arrangement of atoms. But when I read about this bottleneck, I was forcibly reminded of this passage in Where Is My Flying Car?, Chapter 14:

Is it Worth Starting Now?

Surely, you will say, it would have been wonderful if back in 1960 people had taken Feynman seriously and really tried the Feynman path: we’d have the full-fledged paraphernalia of real, live molecular machinery now, with everything ranging from countertop replicators to cell-repair machines.

After all, it’s been 55 years. The 10 factor-of-4 scale reductions to make up the factor-of-a-million scale reduction from a meter-scale system with centimeter parts to a micron-scale system with 10-nanometer parts, could have been done at a leisurely 5 years per step—plenty of time to improve tolerances, do experiments, invent new techniques.

But now it’s too late. We have major investment and experimentation and development in nanotech of the bottom-up form. We have Drexler’s PNAS paper telling us that the way to molecular manufacturing is by protein design. We have a wide variety of new techniques with scanning probes to read and modify surfaces at the atomic level. We have DNA origami producing arbitrary patterns and even some 3-D shapes. We even have protein engineering producing the beginnings of usable objects, such as frames and boxes.

Surely by the time a Feynman Path, started now, could get to molecular scale, the existing efforts, including the pathways described in the Roadmap for Productive Nanosystems, would have succeeded, leaving us with a relatively useless millimeter-sized system with 10-micron sized parts?

No—as the old serials would put it, a thousand times no.

To begin with, a millimeter-sized system with 10-micron sized parts is far from useless. Imagine current-day MEMS but with the catalog and capabilities of a full machine shop, bearings that worked, sliders, powerful motors, robot arms and hands. The medical applications alone would be staggering. ...
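(Checking the book's arithmetic: 4^10 = 1,048,576, close enough to the factor of a million, and ten steps at five years each is 50 years, comfortably within the 55.)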

That's where we should have been, at the very least. But of course we would still have to wait for permission.