
ongoing fragmented essay by Tim Bray

Long Links 1 Jun 2021, 9:00 pm

Welcome to the June 2021 issue of Long Links, in which I curate long-form works that I enjoyed last month. Even if you think all of these look interesting, you probably don’t have time to read them, assuming you have a job (which I don’t). My hope is that one or two will reward your attention.

Has an Old Soviet Mystery at Last Been Solved? — they’re talking about the Dyatlov Pass incident, which has provided fuel for mystery-lovers and conspiracy nuts for a half-century now. If you’ve not heard the Dyatlov story you might want to read this anyhow because it’s colorful and fearful. If you have, then you definitely want to dive into this one because I’m pretty well convinced they’ve figured it out.

Chipotle Is a Criminal Enterprise Built on Exploitation. Tl;dr: New York is suing Chipotle’s ass, looking for a half-billion dollars in penalties for wage theft. Even by the low standards of 21st-century capitalism, Chipotle seems like a terrible citizen of the world. Don’t eat there.

Why Did It Take So Long to Accept the Facts About Covid?. Among the many reasons Covid-19 is interesting (aside from “Will it kill me?”) is as a case study of how science accumulates data, draws conclusions, and communicates them. The specific story is the move from the spring-2020 narrative of “Wash your hands, masks are irrelevant” to 2021’s “Indoor aerosol-based transmission is dominant, so let’s worry about that.” The earlier narrative probably cost us huge numbers of human lives. Nobody suspects anyone of evil motives, but it’s clearly a problem worth thinking about when the official narrative is so slow to update. Masterfully told by Zeynep Tufekci, a sociologist who has become one of the best commentators on Covid public-health issues.

Although I grew up in the Middle East, I’m reluctant to write about it because there are lots of atrocities to denounce but no good guys to praise. The people who wrote the following are more courageous than I am. The central controversy is, of course, over whether the “Two-state solution” is still possible and if not, what then? Everyone agrees on one thing: The current official “peace process” is dead and rotting stinkily. The Old Israeli-Palestinian Conflict Is Dead — Long Live the Emerging Israeli-Palestinian Conflict is from Nathan J. Brown at the Carnegie Endowment for International Peace; it writes off two-states, acknowledges that one-state is unlikely too, and offers tentative ideas about ways forward.

A Liberal Zionist’s Move to the Left on the Israeli-Palestinian Conflict is about Peter Beinart, a long-time lion of intellectual Judaism. He is a rigorous thinker and that rigor has forced him into a two-states-is-dead position. Now he’s arguing for the Palestinian Right of Return; just thinking this probably puts him at grave risk of assassination. This is a big long piece and although I’ve watched the Mideast closely for decades, I felt I’d learned useful things.

Gershom Gorenberg has for a long time been one of my favorite Israeli voices: sentimental but clear-eyed and really smart. His latest big piece is Israelis and Palestinians can’t go on like this. Weep for us. It’s a profoundly pessimistic piece about how Israel got into its current mindset, which is very hard for people who don’t live there to understand. Such strong writing.

Let’s talk about some cheerful stuff, in particular about recent progress on the climate emergency. Everyone’s already written about Big Oil’s defeats in the courts and boardrooms. So here’s J.P. Morgan’s Energy Outlook. It’s huge and I haven’t read all of it, but it feels to me like a nice comprehensive summary of the current state of play. The investment community, of course, is trying to figure out how to make money in a post-fossil-fuels world. I wish them the best of luck and if you’re one of them, you should read this.

Staying with the climate emergency, check out Separating Hype from Hydrogen – Part Two: The Demand Side. Anyone who cares about this stuff has to be wondering if a hypothetical Hydrogen Economy is a significant part of our path forward. The question is a little hard to answer because for some reason hydrogen has attracted a cohort of pitchmen who want to tell you it’s the best solution for everything. A close, clear-eyed look suggests that yes, there is a role for hydrogen, but it’s less important than the enthusiasts want you to think. The conclusions are helpfully pictured in this slide.

More good news from Germany; the courts are starting to kick ass. Germany’s more ambitious climate goals pressure industry to clean up has the details.

Let’s talk about my favorite nontechnical hobby, photography. Hmm, all these pieces are from DPReview. Let’s start with New York Times unveils prototype system aimed at inspiring confidence in photojournalism. I may have mentioned the Content Authenticity Initiative before. On the Internet we say “Pictures or it didn’t happen!” but we should be worrying about “Pictures and it didn’t happen!”. Because photos and video are way too easy to manipulate these days. The Initiative, whose key launch partner was Adobe if I’m reading the history right, tries to use digital signatures to establish a provenance chain from a photographer to the graphic you see on your screen. I’m delighted this is happening, and optimistic that this description will raise consciousnesses about what’s possible these days with modern security technology. No, blockchain is not involved.

Enthusiast photographers tend to obsess about lenses, and one of the standard lenses almost every such person loves is a fast 50mm prime lens, a “nifty fifty”. They make the people you’re taking pictures of look better and have traditionally also had the virtues of being cheap and simple. No longer. Why are modern 50mm lenses so damned complicated? explains.

Finally, it’s all in the photographer’s wrist. The Best & Worst Ways To Hold Your Camera is a YouTube video full of exciting wrist action.

Hey, let’s do politics. These days, my feeling is that occasionally laughing at US “conservatives” is essential therapy; otherwise you might do something crazy, albeit not as crazy as what they’re doing. The G.O.P. Won It All in Texas. Then It Turned on Itself has details. Your eyes will roll.

David Shor, a Democratic-party strategist and number-cruncher, impresses me more with everything he produces. For example, David Shor on Why Trump Was Good for the GOP and How Dems Can Win in 2022 is a long interview with him, to which I say “Wow”.

Only one science/engineering entry this month. I am delighted every time I discover some obvious part of the human experience for which science doesn’t have a good explanation. We can all use the humility. For example: No One Can Explain Why Planes Stay in the Air.

Stepping across the Pacific, here’s Tired of Running in Place, Young Chinese ‘Lie Down’. Now watch out, this is from Sixth Tone, which is out of Shanghai and thus indirectly an organ of China’s ethnofascist autocracy. Having said that, they regularly manage to be interesting.

Ending the Long Links on a musical note, let me recommend Brent Morrison's Rockin’ Blues Show, an Internet radio show that is exactly what it says. Everybody’s life can benefit from rockin’ blues. And now for something completely different: Lebanese Music From A Millionaires' Playground is a production from 1962, featuring Fairuz, Lebanon’s musical queen, who in writing this I discovered is still living. Her voice has always touched my heart. Finally, something to ease your troubled mind: Holly Bowling, live on a Colorado mountaintop. She’s a pianist with (to me) a Keith Jarrett influence (not a bad thing) whose music is mostly sourced from songs by the Grateful Dead and Phish, as performed live, of course.

Hang in there, everyone.


LP Victory 22 May 2021, 9:00 pm

Some readers may remember a February 2019 blog post describing how I inherited 900 or so used LPs, mostly classical. As of this week, I’ve listened to all of them (or rejected them out of hand), and kept 220 or so (counting methods were imperfect). Herewith lessons and reconfigurations. Also there’s a companion piece on sixteen albums I especially liked.

The ritual

First of all, I just want to say how much I’ve enjoyed this. It was a late-evening thing, usually with the kids away from the big room, so just Lauren and me, or sometimes just me. The pleasure comes in quanta of a 20-minute LP side.

I’ll miss it. So as I sat down to write this I grabbed randomly into the middle of the serried record ranks and came up with Fela Kuti’s Roforofo Fight. No recollection of when or where I came by this, but it’s good.

LPs, really?!

Yes, really. I do not argue that vinyl is somehow better, in some absolute way, than digital. Nor do I believe that what pleases me is euphonious distortion. If you care about these issues, I wrote a principled defence of my analog habit.

How many?

It occurred to me that I should probably say how many records I now have. So I started counting but found it wearying so I asked Google “How many LPs per foot?” and the answer seems to be 70-ish. If that’s true I have about 600, a number most real collectors would scoff at. But on average they’re really good.

Why 600+ discards?

Because they were scratched, or had lousy sound that wasn’t redeemed by great music, or because the music was objectively bad (for example operetta or Ferrante & Teicher or several albums featuring the stylings of the house band at some resort in Bermuda), or I just didn’t like it. An example of the latter would be anything by Bartok — people I respect like him but I just don’t get it.

On the other hand, I started this project convinced that I hated everything by Debussy but I ended up keeping a few. And, as a Canadian and Bach lover I’m thus a Glenn Gould fan, but the guy who passed on this collection to me had more GG than any human should reasonably be asked to enjoy.

Too classical?

It would be reasonable to wonder if all this music by Dead White Euros is, well, a suspect use of listening time? To which I answer, in 2345, the music from 2021 that’s still being listened to will probably be the really, really good stuff. And I know for a fact it’ll be less white and male.

I’ll be honest: My soul has shrunk in the absence of loud electric guitar noise. Which these days I enjoy only alone in my car. But it’s not like being in the room; I suspect that on my first post-Covid rock-&-roll excursion I’ll cry like a baby.

Too old?

When I look at my Sixteen Classics, the music isn’t just classical, it’s pretty old stuff. Nothing from Stravinsky, any of his contemporaries, or anything later, with the exception of Mantovani’s awesome easy-listening offering, not exactly ground-breaking stuff. I plead guilty and defer to the taste of the gentleman whose records I inherited.

Having said that, I kept a couple of Schoenbergs and Weberns that I expected to hate but didn’t. Should write about that stuff.

Discogs

I had the feeling that I should track which records I was keeping so as a matter of principle I recorded them all at discogs.com, so now you can visit my collection, currently composed almost entirely of the tour-through-900 output. I sort of hate Discogs because the time I used it to try to buy music was such a disaster. But the database is by any sane measure a treasure. And they let me use it for my personal collection-management purposes for no money, so what’s not to like?

New record player

Long-time readers will know that for many years I rejoiced in the sounds produced by my Simon Yorke Series 9. But it failed under pressure. Its construction, while marvelous, offers no protection from toddlers or kittens or clumsiness, and also it’s very difficult to get a new cartridge mounted correctly.

I’d bought, and rejoiced in the sound of, the hand-built Dynavector DV-20X2 and managed to get it perfectly in place and sounding wonderful when household circumstances destroyed it. So, consumed by anger, I put the Simon Yorke in a box and bought a nice Rega Planar Six and stayed with the Dynavector cartridge.

I should probably write more about this choice but for now, suffice it to say that the Rega has a substantial plastic lid that folds down and protects the delicate parts from family mischief. On top of which it sounds excellent. Sometimes I feel I’ve lost a fractional point of quality at the margin, but I can live with it.

Except that I should sell the Simon Yorke to someone who’d appreciate it.

Nice little hobby

Vinyl has become one. There’s record-store culture and there’s record-player culture and there’s a lot of good music out there. You can do it at a reasonable price. Sure, Spotify or YouTube will eventually stream everything, but there’s something to be said for going out and hunting your own music. And there’s a lot to be said for sitting down in a quiet room and listening carefully.


Sixteen Classics 22 May 2021, 9:00 pm

I just finished writing about the process of, and lessons from, processing 900 inherited LPs into my collection. I thought it wouldn’t be fair to stay meta, so here is a handful of my favorites from among the new arrivals. Yes, they are all music by dead white men, performed by other dead white men (and a couple of women). Sorry. They were released between 1953 and 1978.

You can buy these online at discogs.com but I don’t recommend it because Discogs is sleazy, or on eBay, but I don’t recommend that either. Because why not go to an actual record store (most towns have ’em) and cruise the Classical section? (It’s usually pretty quiet.) Pick up some Rock & Roll and Country and Dubstep and Reggaeton and Dixieland and Bollywood while you’re there too. My favorite thing to do in a record store is leaf through the “new arrivals”. Which, granted, wouldn’t include any of these.

Trumpet trills!


Trumpet Concertos, performed by Adolf Scherbaum. Early stuff; the composers are Haydn, Leopold Mozart, Marc-Antoine Charpentier, Alessandro Stradella, Giuseppe Torelli, Vivaldi, Telemann, Johann Christoph Graupner, and Johann Friedrich Fasch.

None of these works are musical monuments, but Scherbaum digs into the big baroque lilt and offers the juiciest trills and ornamentation I’ve ever heard anywhere; impossible not to smile. Sound quality is adequate.

Date: 1965.


Opera, but no singing

Verdi Overtures, with Abbado and the London Symphony. I enjoy live opera but for no reason I can explain just don’t listen to recordings. Which is probably unfair to Verdi, because on the evidence here, he wrote really superior music. Abbado and the Londoners really bring it; this is the kind of record that, if you’re reading a book or something, regularly grabs your attention away. Terrific sound too.

Date: 1978.


Oistrakh!

David Oistrakh’s 60th Birthday Jubilee Concert. Oistrakh plays the Violin Concerto with the Moscow State Philharmonic, and then conducts them in the 6th “Pathétique” symphony. Recorded live on two successive evenings in September 1968 by Melodiya, the old Soviet record label.

This is my favorite among the 900 LPs, and it’s not close. I can’t say which performance I like better. Oistrakh, to quote George Clinton, tears the muthafuckin roof off in the violin concerto, with fabulous tone and dynamics, and isn’t shy about using showy cheap tricks, which is fine by me. Also this is one of Peter Ilyich’s best works.

The Pathétique is more controlled, but it’s controlled ferocity; some of the sequences have an aesthetic that is straight out of heavy metal.


In case it’s not obvious, listening to this album is like being at a very fine rock concert by someone who’s on top of their game, knows it, and is determined to send the audience home with their heads reeling.

Date: 1968.

Beaux Artistes

I mean the Beaux Arts Trio, who performed for fifty years starting in 1955. Tchaikovsky’s Piano Trio in A Minor, Op. 50 is a really fine piece of chamber writing, and the Artistes bring loads of dynamics; this explodes out of the speakers.


Beethoven “Archduke” trio, Op. 97. Another fine performance of essential music. There’s a problem with chamber music; it can be more fun to play than to listen to, in particular because a lot of recordings are dominated by steely mid-string surface noise. These two just speak truth; the wiggles in the LP grooves really capture what those fine instruments, well played, sound like.

Date: 1971, 1975.


Very easy listening

Mantovani and his Orchestra — More Golden Hits. I have a bit of a soft spot for “easy listening” music from the period of my childhood, I think because my Dad liked to play it sometimes. Now I own several inches of such LPs, of which this is the best by miles. One generally doesn’t expect musical depth; in this case, one would be wrong. To start with, Mantovani gets a string sound — perhaps with the help of studio magic? — that is ravishing, just astonishingly sweet. Also, the arrangements are fine, the playing polished, and there are nice tunes here, especially Stranger in Paradise.

Date: 1976.


Hail Victoria

I mean Victoria de los Ángeles (1923-2005), Spanish soprano, and A World Of Song. I dropped the needle on this one and thought “pretty scratchy, probably not keeping it” but then Ms dlÁ started singing and I sat back and shut up. As I said, I rarely-to-never listen to recorded opera singers, but this is just an explosion of passion and power. I wish I’d found a way to see her perform in her lifetime. Wow.

Date: 1965.


Russian Reiner

Marche Slave — A Festival of Russian Music. I don’t know the Russian for “chestnut” in the musical sense, but these are those. If you ever took a high-school music class and the teacher played you Famous Russian Tunes, you’ll probably find them here. They’re famous because they’re good. But let’s be honest, this one makes the list at least partly because it’s an audio showpiece. Turn it up as loud as you can without someone calling 911, and sink into the musical goo.

Date: 1960.


Bruch/Mozart violin

Heifetz — Bruch Concerto in G Minor — Mozart Concerto in D Major. Fine music, fine fiddler, fine orchestra, great sound; this is just a treat for the ears. The 19-year-old Mozart said his own debut performance of the D Major "went like oil" and I can see why, this is extremely smooth music.

Heifetz charges through the Bruch, high drama, maximum dynamics, thundering orchestra, soaring violin. What’s not to like?


Wait, did I mention the B side of the record first? Yes, Heifetz’s assault on the Bruch is good fun. But the stack of inherited LPs also included Bruch / Kyung-Wha Chung Violin Concerto and Scottish Fantasia. She decided to maximize flow not drama and it’s really a much better take. Then on the Fantasia she and the orchestra reach back and let ’er rip, impossible to listen without smiling.

Date: 1963, 1972.


Julian & George

Julian Bream / George Malcolm – Sonatas For Lute And Harpsichord. The sonatas are by Bach and Vivaldi and they’re great. Also, I like that the cover photo of Bream and Malcolm looks like they’re just back from several hours at the pub. Here we have two instruments, neither of which has any sustain, so achieving legato is a challenge, which the boys laugh at; the slow movements are the best part. They take liberties with the arrangements, with a sure hand I think. The shift between Johann Sebastian’s music and Antonio’s is unsubtle, and left me appreciating each of them more.

Date: 1969.


Tight Mozart

Daniel Barenboim, The English Chamber Orchestra – Mozart – Piano Concerto No. 14 In E-Flat, K. 449; Piano Concerto No. 15 In B-Flat, K. 450. It’s hard to go wrong with Barenboim and Mozart; also the orchestra is excellent and the recording just fine. But the thing that grabbed me here was the absolutely exceptional ensemble, as though the soloist and orchestra were sharing a single brain. Reminds me of a rock concert I stage-managed one time; the band had been on the road for a long time and, standing listening, one of my stagehands said to me “They so tight, they loose.” Yeah.

Date: 1968.


Serenity

No, not the spaceship in Firefly, the feeling. L'Adagio D'Albinoni / Le Largo De Haendel Et Huit Aria Célébres provides it. Along with Albinoni and Handel, there are tracks by Bach, Haydn, Gluck, and Mozart; every single one is a gem. Lots of modern-life situations demand serenity and it’s good to know there’s a flat round piece of vinyl that has it to offer.

Also, wonderful sound, but you probably won’t notice that.

Date: 1976.


Take it down

Rachmaninoff / Silverman – Piano Sonata No. 1 In D Minor, Op. 28 / Etude-Tableau In C Minor, Op. 33 No. 3. Silverman is a Canadian piano player who brings a lot of feeling to what he plays and I’m a fan. This outing is moody and restrained; I’d describe the tone as “heavy” in a good way, as in fully laden. There’s no rushing and no theatrics, just a whole lot of very good playing that is entirely sure of where it’s going and makes you content to wait while it gets there.

Date: 1976.


Symphony boxes

Haydn — Philharmonia Hungarica, Antal Dorati — Symphonies 20 - 35. Dorati put together an orchestra of Hungarians who got through the Iron Curtain after the 1956 uprising and they released all of Haydn’s hundred-plus symphonies, in multi-LP box sets. I inherited three such boxes, have only listened to a sampling of the sides, but have I ever enjoyed them. It’s mind-boggling that a mere mortal could create this much music at a very, very high standard. The production gets out of the way, the sound is decent, and the playing is unflashy but then so are the symphonies, mostly. Awfully good stuff.

Date: 1973.


Violin alone

Heifetz Plays Bach — Unaccompanied Sonatas And Partitas (complete). For many years I’ve cherished Gidon Kremer’s recording of this material, blogging at length about it in 2007. Heifetz isn’t as precise and arguably doesn’t dig as deep into this very deep music. But it’s impossible not to enjoy because Heifetz sounds like he’s enjoying playing it so much. Kremer’s instrument vanishes for me, I just hear the music. Listening to Heifetz I think “That’s some damn fine fiddling.” The fact that it’s in mono is irrelevant for a recording of a single instrument, the sound is just fine.

Date: 1953.


It’s Bad 16 May 2021, 9:00 pm

As we’ve slogged through Trump and carbon-loading and Covid, I’ve managed to remain more or less un-depressed and find enough gratifications in day-to-day life to keep the black dog of despair at a distance. But this morning I woke up early and, while the household slept, read most of the May 15th issue of The Economist. And folks, with a very few exceptions, it was pretty well all bad news. I feel an obligation to pass this summary along with a plea: As we relax slowly into post-Covid, maybe try to find a bit of extra energy and put it into politics or philanthropy or some other way to mitigate the awfulness. Our children deserve it.

Economist May 15th 2021 issue cover

I’ve highlighted the good news.

Lead articles

Vaccinating the World: The grim death estimates are undercounts. First doses into poorer arms should take priority but aren’t.

IsraPal: Bibi’s strategy of ignoring the Palestine issue can’t work.

Inflation: Might just be supply-chain constipation.

Corporate Tax Dodging: At unforgivable levels; there’s talk but no action.

USA

Next New York Mayor: The city’s started to shrink.

Cyberattacks: The bad guys are many and the defences laughable.

Homelessness: Awful and growing.

Opioids: Awful — in places worse than Covid — and not getting better.

Evictions: A million per year, no consensus on how to address.

Republicans: Lying liars present a deadly threat to democracy.

The Americas

Post-Covid: Latin-American economy in really rough shape.

Brazil: Starvation setting in, government evil and incompetent.

Costa Rica: Good-ish! No working street-address system but they’re trying to fix it.

Asia

Covid in South-East Asia: Looks like the next India.

Indonesia: Privatizing the vaccination process isn’t working.

Taiwan: Its indigenous people are having a tough time.

India: Covid’s hitting the poorest hardest. Grisly stuff.

The Koreas: North-south dialog LOL ha ha.

Maldives: Nasty-Islamist alert.

China

Sports: Corruption and oppression are bad for sports, especially soccer.

Hockey: That too.

Jobs: Xenophobia and autocracy are bad for international business, so young Chinese head for the civil service.

Middle East & Africa

IsraPal: Israel started this round, nobody knows how to stop it.

Saudi: MBS tries playing nice but isn’t good at it.

Lake Victoria: Fishery in big trouble, brutal remedies probably won’t help.

Somali Camels: Good news! More market transparency and women are getting in.

Nigeria: Post-oil corruption, war, crime, and poverty.

Europe

France: FN fascists looking good in the next elections.

Bulgaria: Prime minister’s a crook and an asshole but apparently on the way out.

Sweden: Good news! They’re building a successful green steel business in the north, something that could really move the carbon needle.

Turkey: Tourism’s in the toilet.

The Welfare State: Not terrible: The Euros are good at this and trying to use EU machinery to make things better.

Economic recovery: Good news! Seems EU public finance is less austerity-crazed.

Britain

Boris: Overentitled asswipe trying to do some stuff, probably bad.

Elections: Tories introducing voter ID but it doesn’t look (unlike the USA) like it’ll help them steal elections.

Labour: In big trouble in its blue-collar heartland but making up some middle-class ground.

Belfast: Yes, the British army did casually gun down Catholics fifty years ago. No, amnesties are not in order.

Scotland: Nicola and Boris gonna play referendum chicken for the foreseeable future.

Wales: Anger over English buying second homes in pretty places.

Pubs: Having trouble hiring but claim they can’t pay more.

The Labour Party: It’s all fucked up.

International

Algeria: Truth and Reconciliation thing over the Algerian War (of independence against France) looking dubious.

Business

SpaceX: Doing OK.

Mothers’ work experience: Bad. Used to be worse.

Harley-Davidson: In bad shape.

Eurolobbyists: 25K of them in Brussels. What’s that smell?

Pharma patents: It’s about rent-seeking.

CEO Pay: They used Covid to shovel more money to themselves.

Finance & Economics

US inflation: Nobody knows what’s going on.

Britain: In decline.

China’s population: Has probably peaked; bad for the business model.

David Swensen died: Made rich universities richer.

Corporate tax: Biden wants to do something about global sleaze, probably won’t work.

Corporate tax again: Traditional Economist sophistry against it.

The rest

I’m skipping Science & Technology & Books & Arts, but there’s a nice piece about Stacey Abrams, who turns out to be an accomplished novelist.

Don’t give up!

On every one of these issues, there is a sane path forward if people are willing to bring ethical passion to the table.


Testing in the Twenties 15 May 2021, 9:00 pm

Grown-up software developers know perfectly well that testing is important. But — speaking here from experience — many aren’t doing enough. So I’m here to bang the testing drum, which our profession shouldn’t need to hear but apparently does.

This was provoked by two Twitter threads (here and here) from Justin Searls, from which a couple of quotes: “almost all the advice you hear about software testing is bad. It’s either bad on its face or it leads to bad outcomes or it distracts by focusing on the wrong thing (usually tools)” and “Nearly zero teams write expressive tests that establish clear boundaries, run quickly & reliably, and only fail for useful reasons. Focus on that instead.” [Note: Justin apparently is in the testing business.]

Twitter threads twist and fork and are hard to follow, so I’m going to reach in and reproduce a couple of image grabs from one branch.

Two testing-pyramid diagrams: one credited to Dodds, the other credited to Spotify.

Let me put a stake in the ground: I think those misshapen blobs are seriously wrong in important ways.

My prejudices

I’ve been doing software for money since 1979 and while it’s perfectly possible that I’m wrong, it’s not for lack of experience. Having said that, almost all my meaningful work has been low-level infrastructural stuff: Parsers, message routers, data viz frameworks, Web crawlers, full-text search. So it’s possible that some of my findings are less true once you get out of the infrastructure space.

History

In the first twenty years of my programming life, say up till the turn of the millennium, there was shockingly little software testing in the mainstream. One result was, to quote Gerald Weinberg’s often-repeated crack, “If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.”

Back then it seemed that for any piece of software I wrote, after a couple of years I started hating it, because it became increasingly brittle and terrifying. Looking back in the rear-view, I’m thinking I was reacting to the experience, common with untested code, of small changes unexpectedly causing large breakages for reasons that are hard to understand.

Sometime in the first decade of this millennium, the needle moved. My perception is that the initial impetus came at least partly out of the Ruby community, accelerated by the rise of Rails. I started to hear the term “test-infected”, and I noticed that code submissions were apt to be coldly rejected if they weren’t accompanied by decent unit tests.

Others have told me they initially got test-infected by the conversation around Martin Fowler’s Refactoring book, originally from 1999, which made the point that you can’t really refactor untested code.

In particular I remember attending the Scottish Ruby Conference in 2010 and it seemed like more or less half the presentations were on testing best-practices and technology. I learned lessons there that I’m still using today.

I’m pretty convinced that the biggest single contributor to improved software in my lifetime wasn’t object-orientation or higher-level languages or functional programming or strong typing or MVC or anything else: It was the rise of testing culture.

What I believe

The way we do things now is better. In the builders-and-programmers metaphor, civilization need not fear woodpeckers.

For example: In my years at Google and AWS, we had outages and failures, but very very few of them were due to anything as simple as a software bug. Botched deployments, throttling misconfigurations, cert problems (OMG cert problems), DNS hiccups, an intern doing a load test with a Python script, malfunctioning canaries, there are lots of branches in that trail of tears. But usually not just a bug.

I can’t remember when precisely I became infected, but I can testify: Once you are, you’re never going to be comfortable in the presence of untested code.

Yes, you could use a public toilet and not wash your hands. Yes, you could eat spaghetti with your fingers. But responsible adults just don’t do those things. Nor do they ship untested code. And by the way, I no longer hate software that I’ve been working on for a while.

I became monotonically less tolerant of lousy testing with every year that went by. I blocked promotions, pulled rank, berated senior development managers, and was generally pig-headed. I can get away with this (mostly) without making enemies because I’m respectful and friendly and sympathetic. But not, on this issue, flexible.

So, here’s the hill I’ll die on (er, well, a range of foothills I guess):

  1. Unit tests are an essential investment in your software’s future.

  2. Test coverage data is useful and you should keep an eye on it.

  3. Untested legacy code bases can and should be improved incrementally.

  4. Unit tests need to run very quickly with a single IDE key-combo, and it’s perfectly OK to run them every few seconds like a nervous tic.

  5. There’s no room for testing religions; do what works.

  6. Unit tests empower code reviewers.

  7. Integration tests are super important and super hard, particularly in a microservices context.

  8. Integration tests need to pass 100%; it’s not OK for there to be failures that are ignored.

  9. Integration tests need to run “fast enough”.

  10. It’s good for tests to include benchmarks.

Now I’ll expand on the claims in that list. Some of them need no further defense (e.g. “unit tests should run fast”) and will get none. But first…

Can you prove it works?

Um, nope. I’ve looked around for high-quality research on testing efficacy, and didn’t find much.

Which shouldn’t be surprising. You’d need to find two substantial teams doing nontrivial development tasks where there is rough-or-better equivalence in scale, structure, tooling, skill levels, and work practices — in everything but testing. Then you’d need to study productivity and quality over a decade or longer. As far as I know, nobody’s ever done this and frankly, I’m not holding my breath. So we’re left with anecdata, what Nero Wolfe called “Intelligence informed by experience.”

So let’s not kid ourselves that our software-testing tenets constitute scientific knowledge. But the world has other kinds of useful lessons, so let’s also not compromise on what our experience teaches us is right.

Unit tests matter now and later

When you’re creating a new feature and implementing a bunch of functions to do it, don’t kid yourself that you’re smart enough, in advance, to know which ones are going to be error-prone, which are going to be bottlenecks, and which ones are going to be hard for your successors to understand. Nobody is smart enough! So write tests for everything that’s not a one-line accessor.
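To make that concrete, here’s a minimal sketch of the habit in Go. The function and its test are invented for illustration, not taken from any real codebase; the point is that once the table exists, the next test case you think of costs one line.

```go
// wordcount_test.go: a tiny, ordinary function and a table-driven unit test
// for it. Nothing clever; WordCount is exactly the kind of small helper that
// still deserves coverage.
package text

import (
	"strings"
	"testing"
)

// WordCount returns the number of whitespace-separated words in s.
func WordCount(s string) int {
	return len(strings.Fields(s))
}

func TestWordCount(t *testing.T) {
	tests := []struct {
		name string
		in   string
		want int
	}{
		{"empty", "", 0},
		{"one word", "hello", 1},
		{"extra spaces", "  lots   of   spaces  ", 3},
		{"newlines", "one\ntwo\nthree", 3},
	}
	for _, tc := range tests {
		if got := WordCount(tc.in); got != tc.want {
			t.Errorf("%s: WordCount(%q) = %d, want %d", tc.name, tc.in, got, tc.want)
		}
	}
}
```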

In case it’s not obvious, the graphic above from Spotify that dismisses unit testing with the label “implementation detail” offends me. I smell Architecture Astronautics here, people who think all the work is getting the boxes and arrows right on the whiteboard, and are above dirtying their hands with semicolons and if statements. If your basic microservice code isn’t well-tested you’re building on sand.

Working in a well-unit-tested codebase gives developers courage. If a little behavior change would benefit from re-implementing an API or two, you can be bold; go ahead and do it. Because with good unit tests, if you screw up, you’ll find out fast.

And remember that code is read and updated way more often than it’s written. I personally think that writing good tests helps the developer during the first development pass and doesn’t slow them down. But I know, as well as I know anything about this vocation, that unit tests give a major productivity and pain-reduction boost to the many subsequent developers who will be learning and revising this code. That’s business value!

Exceptions

Where can we ease up on unit-test coverage? Back in 2012 I wrote about how testing UI code, and in particular mobile-UI code, is unreasonably hard, hard enough to probably not be a good investment in some cases.

Here’s another example, specific to the Java world, where in the presence of dependency-injection frameworks you have huge files with literally thousands of lines of config gibberish [*cough* Spring Boot *cough*] and life’s just too short.

A certain number of exception-handling scenarios are so far-fetched that you’d expect your data center to be in flames before they happen, at which point an IOException is going to be the least of your troubles. So maybe don’t obsess about those particular if err != nil clauses.

Coverage data

I’m not dogmatic about any particular codebase hitting any particular coverage number. But the data is useful and you should pay attention to it.

First of all, look for anomalies: Files that have noticeably low (or high) coverage numbers. Look for changes between check-ins.

And coverage data is more than just a percentage number. When I’m most of the way through some particular piece of programming, I like to do a test run with coverage on and then quickly glance at all the significant code chunks, looking at the green and red sidebars. Every time I do this I get surprises, usually in the form of some file where I thought my unit tests were clever but there are huge gaps in the coverage. This doesn’t just make me want to improve the testing, it teaches me something I didn’t know about how my code is reacting to inputs.

Having said that, there are software groups I respect immensely who have hard coverage requirements and stick to them. There’s one at AWS that actually has a 100%-coverage blocking check in their CI/CD pipeline. I’m not sure that’s reasonable, but these people are doing very low-level code on a crucial chunk of infrastructure where it’s maybe reasonable to be unreasonable. Also they’re smarter than me.

Legacy code coverage

I have never, and mean never, worked with a group that wasn’t dragging along weakly-tested legacy code. Even a testing maniac like me isn’t going to ask anyone to retro-fit high-coverage unit testing onto that stinky stuff.

Here’s a policy I’ve seen applied successfully. It has two parts: First, when you make any significant change to a function that doesn’t have unit tests, write them. Second, no check-in is allowed to make the coverage numbers go down.

This works out well because, when you’re working with a big old code-base, updates don’t usually scatter uniformly around it; there are hot spots where useful behavior clusters. So if you apply this policy, the code’s “hot zone” will organically grow pretty good test coverage while the rest, which probably hasn’t been touched or looked at for years, is ignored, and that’s OK.
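Here’s a sketch of how such a gate might be wired up, in Go. It is not any standard tool, and the file names and CI steps are assumptions made up for illustration; it just compares the “total:” line from the output of go tool cover -func against a saved baseline and fails the build if coverage fell.

```go
// coveragegate.go: a minimal sketch (hypothetical, not a standard tool) of the
// "no check-in may lower coverage" policy. It assumes a CI step has already run
//   go test -coverprofile=cover.out ./...
//   go tool cover -func=cover.out > current.txt
// and that baseline.txt holds the same report from the previous good build.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// totalCoverage pulls the percentage off the "total:" line that
// `go tool cover -func` prints, e.g. "total:  (statements)  73.1%".
func totalCoverage(path string) (float64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "total:") {
			fields := strings.Fields(line)
			pct := strings.TrimSuffix(fields[len(fields)-1], "%")
			return strconv.ParseFloat(pct, 64)
		}
	}
	return 0, fmt.Errorf("no total: line found in %s", path)
}

func main() {
	baseline, err := totalCoverage("baseline.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	current, err := totalCoverage("current.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if current < baseline {
		fmt.Fprintf(os.Stderr, "coverage fell from %.1f%% to %.1f%%, blocking check-in\n", baseline, current)
		os.Exit(1)
	}
	fmt.Printf("coverage %.1f%% (baseline %.1f%%), OK\n", current, baseline)
}
```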

No religion

Testing should be an ultimately-pragmatic activity with no room for ideology.

Please don’t come at me with pedantic arm-waving about mocks vs stubs vs fakes; nobody cares. On a related subject, when I discovered that lots of people were using DynamoDB Local in their unit tests for code that runs against DynamoDB, I was shocked. But hey, it works, it’s fast, and it’s a lot less hassle than either writing yet another mock or setting up a linkage to the actual cloud service. Don’t be dogmatic!
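For what it’s worth, pointing a test at DynamoDB Local is only a few lines. Here’s a sketch using the AWS SDK for Go (v1); the region, credentials, and package names are placeholders rather than anything prescribed, and it assumes DynamoDB Local is running on its default port of 8000.

```go
// dynamolocal_test.go: a sketch of pointing a test at DynamoDB Local instead
// of mocking the service. Region and credentials are placeholders; DynamoDB
// Local doesn't check them.
package store

import (
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// localDynamo returns a client wired to a DynamoDB Local process on the
// default port, rather than the real cloud service.
func localDynamo(t *testing.T) *dynamodb.DynamoDB {
	t.Helper()
	sess, err := session.NewSession(&aws.Config{
		Region:      aws.String("us-west-2"), // any valid-looking region will do locally
		Endpoint:    aws.String("http://localhost:8000"),
		Credentials: credentials.NewStaticCredentials("local", "local", ""),
	})
	if err != nil {
		t.Fatalf("session: %v", err)
	}
	return dynamodb.New(sess)
}

func TestAgainstLocalDynamo(t *testing.T) {
	db := localDynamo(t)
	// Create a throwaway table here, run the code under test against db,
	// and clean up afterward.
	_ = db
}
```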

Then there’s the TDD/BDD faith. Sometimes, for some people, it works fine. More power to ’em. It almost never works for me in a pure form, because my coding style tends to be chaotic in the early stages; I keep refactoring and refactoring the functions all the time. If I knew what I wanted them to do before I started writing them, then TDD might make sense. On the other hand, when I’ve got what I think is a reasonable set of methods sketched in and I’m writing tests for the basic code, I’ll charge ahead and write more for stuff that’s not there yet. Which doesn’t qualify me for membership in the church of TDD, but I don’t care.

Here’s another religion: Java doesn’t make it easy to unit-test private methods. Java is wrong. Some people claim you shouldn’t want to test those methods because they’re not part of the class contract. Those people are wrong. It is perfectly reasonable to compromise encapsulation and make a method non-private just to facilitate testing. Or to write an API to take an interface rather than a class object for the same reason.
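The Go-flavored version of that last move, sketched below with invented names, is to have the function accept a small interface so a test can hand in a fake; same idea, same motivation.

```go
// A sketch of "accept an interface rather than a concrete object" purely to
// make testing easy. Clock, IsOverdue, and fixedClock are invented names.
package billing

import "time"

// Clock is the one method the code under test actually needs.
type Clock interface {
	Now() time.Time
}

// realClock is what production code passes in.
type realClock struct{}

func (realClock) Now() time.Time { return time.Now() }

// IsOverdue reports whether an invoice due at `due` is overdue, judged by
// whatever Clock the caller supplies.
func IsOverdue(c Clock, due time.Time) bool {
	return c.Now().After(due)
}

// fixedClock would normally live in the _test.go file; a test passes
// fixedClock{t: someKnownTime} and gets deterministic behavior for free.
type fixedClock struct{ t time.Time }

func (f fixedClock) Now() time.Time { return f.t }
```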

When you’re running a bunch of tests against a complicated API, it’s tempting to write a runTest() helper that puts the arguments in the right shape and runs standardized checks against the results. If you don’t do this, you end up with a lot of repetitive cut-n-pasted code.

There’s room for argument here, none for dogma. I’m usually vaguely against doing this. Because when I change something and a unit test I’ve never seen before fails, I don’t want to have to go understand a bunch of helper routines before I can figure out what happened.

Anyhow, if your engineers are producing code with effective tests, don’t be giving them any static about how it got that way.

The reviewer’s friend

Once I got a call out of the blue from a Very Important Person saying “Tim, I need a favor. The [REDACTED] group is spinning their wheels, they’re all fucked up. Can you have a look and see if you can help them?” So I went over and introduced myself and we talked about the problems they were facing, which were tough.

Then I got them to show me the codebase and I pulled up a few review requests. The first few I looked at had no unit tests but did have notes saying “Unit tests to come later.” I walked into their team room and said “People, we need to have a talk right now.”

[Pause for a spoiler alert: The unit tests never come along later.]

Here’s the point: The object of code reviewing is not correctness-checking. A reviewer is entitled to assume that the code works. The reviewer should be checking for O(N³) bottlenecks, readability problems, klunky function arguments, shaky error-handling, and so on. It’s not fair to ask a reviewer to think about that stuff if you don’t have enough tests to demonstrate your code’s basic correctness.

And it goes further. When I’m reviewing, it’s regularly the case that I have trouble figuring out what the hell the developer is trying to accomplish in some chunk of code or another. Maybe it’s appropriate to put in a review comment about readability? But first, I flip to the unit test and see what it’s doing, because sometimes that makes it obvious what the dev thought the function was for. This also works for subsequent devs who have to modify the code.

Integration testing

The people who made the pictures up above all seem to think it’s important. They’re right, of course. I’m not sure the difference between “integration” and “end-to-end” matters, though.

The problem is that moving from monoliths to microservices, which makes these tests more important, also makes them harder to build. Which is another good reason to stick with a nice simple monolith if you can. No, I’m not kidding.

Which in turn means you have to be sure to budget time, including design and maintenance time, for your integration testing. (Unit testing is just part of the basic coding budget.)

Complete and fast

I know I find these hard to write and I know I’m not alone because I’ve worked with otherwise-excellent teams who have crappy integration tests.

One way they’re bad is that they take hours to run. This is hardly controversial enough to be worth saying but, since it’s a target that’s often missed, let’s say it: Integration tests don’t need to be as quick as unit tests but they do need to be fast enough that it’s reasonable to run them every time you go to the bathroom or for coffee, or get interrupted by a chat window. Which, once again, is hard to achieve.

Finally, time after time I see integration-test logs show failures and some dev says “oh yeah, those particular tests are flaky, they just fail sometimes.” For some reason they think this is OK. Either the tests exercise something that might fail in production, in which case you should treat failures as blockers, or they don’t, in which case you should take them out of the damn test suite which will then run faster.

Benchmarks

Since I’ve almost always worked on super-performance-sensitive code, I often end up writing benchmarks, and after a while I got into the habit of leaving a few of them live in the test suite. Because I’ve observed more than a few outages caused by a performance regression, something as dumb as a config tweak pushing TLS compute out of hardware and into Java bytecodes. You’d really rather catch that kind of thing before you push.
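Go makes this cheap, because benchmarks live in the same test files and run under the same tooling (go test -bench=.). Here’s a sketch; the message format and the parsing function are invented stand-ins for whatever hot-path code you actually have.

```go
// parse_bench_test.go: a benchmark kept in the ordinary test suite so that a
// performance regression shows up where developers are already looking.
package routing

import (
	"encoding/json"
	"testing"
)

type message struct {
	Type    string `json:"type"`
	ID      int    `json:"id"`
	Payload string `json:"payload"`
}

var sample = []byte(`{"type":"update","id":42,"payload":"hello"}`)

// parseMessage is a stand-in for the real hot-path parsing code.
func parseMessage(data []byte) (message, error) {
	var m message
	err := json.Unmarshal(data, &m)
	return m, err
}

func BenchmarkParseMessage(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := parseMessage(sample); err != nil {
			b.Fatal(err)
		}
	}
}
```

If the ns/op number jumps between two CI runs, that’s exactly the kind of regression you want to catch before you push.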

Tooling

There’s plenty. It’s good enough. Have your team agree on which they’re going to use and become expert in it. Then don’t blame tools for your shortcomings.

Where we stand

The news is I think mostly good, because most sane organizations are starting to exhibit pretty good testing discipline, especially on server-side code. And like I said, this old guy sees a lot fewer bugs in production code than there used to be.

And every team has to wrestle with those awful old stagnant pools of untested legacy. Suck it up; dealing with that is just part of the job. Anyhow, you probably wrote some of it.

But here and there every day, teams lose their way and start skipping the hand-wash after the toilet visit. Don’t. And don’t ship untested code.


Hydrofoiling 10 May 2021, 9:00 pm

I was standing on a floating dock on a Pacific-ocean inlet, a place where it’s obvious why motorboats are such an environmental disaster. Fortunately lots of other people have noticed too, and it looks increasingly likely that more people will be able to enjoy Messing About In Boats without feeling like they’re making Greta Thunberg justifiably angry at them.

Hm, this piece got kind of long. Spoiler: I’m here mostly to talk about electric hydrofoil boats from Candela, in Sweden, which look like a genuinely new thing in the world. But I do spend a lot of words setting up (separately) the electric-boat and hydrofoil propositions, which you can skip over if you’d like.

Background

Last year I wrote about personal decarbonization, noting that our family works reasonably hard at reducing our carbon load, with the exception being that we still have a power boat that burns an absurd amount of fossil fuel for each km of the ocean that you cross.

Electric boats

Separately, back in 2019 I wrote a little survey of electric-boat options; my conclusion was that it shouldn’t be too long before they’re attractive for the “tugboat” flavor of pleasure boats. Which is perfectly OK unless you want to cover some distance and get there fast.

There’s been quite a bit of action in the electric-boat space. Here are some companies: VisionMarine Technologies (formerly the Canadian Electric Boat Company), Rand Boats, Greenline Yachts, Zin Boats, and then here’s Electrek’s electric-boats articles. If you like boats at all there’ll be pictures and videos in that collection that you’ll enjoy.

A lot of these are interesting, but (near as I can tell) none of them come close to combining the comfort, capacity, speed, and range that my family’s current Jeanneau NC 795 does.

But in the wake of Tesla, enough people are getting interested in this problem that it’s a space to watch.

That float

Here’s a picture of the float (and the boat). I was sitting down there on a lawn-chair enjoying the view, good company, and an adult beverage.

Floating dock on Howe Sound

What I noticed was that every time a motorboat went by, the wake really shook us up on the float. That float is a big honking chunk of wood, something like ten by twenty feet, and those powerboat wakes toss it around like a toy.

Which is to say, moving a boat of any size along at any speed throws a whole lot of water around. My impression is that most of the energy emitted by the burning fossil fuel goes into pushing water sideways, rather than fiberglass forward.

Let me quote from that Electric Boats piece: “There are two classes of recreational motorboat: Those that go fast by planing, and those that cruise at hull speed, which is much slower and smoother.”

Both of those processes involve pushing a lot of water sideways. And it turns out that my assertion is just wrong because there’s a third approach: the Hydrofoil. It’s not exactly a new idea either, the first patent was filed in 1869, and while they’re not common, you do see them around. In fact, here’s a picture of one I took in Hong Kong Harbour, of the fast ferry that takes high-rollers over to the casinos in Macau.

1994 picture of the fast hydrofoil Hong Kong-Macau ferry

If it looks a little odd that’s because I took it in 1994, with a film camera.

Hydrofoil sailing boats are a thing, including the AC75 design that was used in the most recent America’s Cup, probably the world’s highest-profile sailing race. These things are fast, coming close to 100 km/h. I strongly recommend this video report.

Then there’s the Vestas Sailrocket 2, which currently holds the world record for the highest sustained speed over 500m: 66.45 knots, which is 121.21 km/h. There’s a nice write-up in Wired.

Also check out the Persico 69F, something like the AC75 only anyone can buy one. Here’s a story in Sailing World that has a lot of flavor.

Hydrofoil + electrons = Candela

I guess it’s pretty obvious where I’ve been heading: What about an electric hydrofoil motorboat? It turns out that there’s an outfit called Candela which is building exactly such a thing near Stockholm; their first boat is called the C-7. I reached out and got into an email dialog with Alexander Sifvert, their Chief Revenue Officer. It’s remarkably easy to strike up such a conversation when you open with “I’m a boater and an environmentalist and well-off.”

The idea is that you make an extremely light hull out of carbon fibre, give it an electrical drive-train, and put it on software-controlled hydrofoils that lift the hull out of the water and adjust continuously to waves and currents, to give you a smooth, silent, fast ride. They claim that the foils experience only a quarter of the water drag compared to any conventional hull going fast — the typical cruising speed is 20kt or 37 km/h. So with a 40kWh battery pack like that in a BMW i3, they get 90+km of range at cruising speed.
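If you take those numbers at face value (my back-of-the-envelope arithmetic, not Candela’s), 40 kWh spread over 90 km is roughly 0.45 kWh per kilometre, and at 37 km/h that works out to an average draw of about 16 kW, or roughly 22 horsepower, to keep the boat moving at planing speeds. That’s a startlingly small number compared to the engines bolted onto conventional hulls of similar size.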

Obviously there’s vastly less wake, and once you’re out at sea the operating cost is rounding error compared to what you pay to fill up the average motorboat’s huge fuel tank. Also, experience from the electric-car world suggests that the service costs on a setup like this should be a lot less over the years.

All this on top of the fact that your carbon load is slashed dramatically, more if your local electricity is green but still a lot. I also appreciate the low wake and silence — at our cabin, overpowered marine engines going too fast too close to shore are a real quality-of-life issue.

The Candela website is super-ultra-glossy and not terribly information-dense. Soak up some of the eye-candy then jump to the FAQ.

But there are terrific videos. The one that most spoke to me was this, of a C-7 and a conventional motorboat running side by side in rough water, which we experience a lot of up here in the Pacific Northwest. Here we have a famous powerboat racer (yes, there are such things) taking a C-7 out for a spin. And here we have a C-7 cruising along beside one of those insanely-fast hydrofoil sailboats from Persico.

I’m seriously wondering if this is the future of recreational motorboating.

Would I get one?

Well, it’s a good thing the C-7 doesn’t use much fuel, because it’s really freaking expensive, like three times the price of my nice little Jeanneau. Which should not be surprising in a product that is built out of carbon fibre, by hand, “serially” they say, in a nice neighborhood near Stockholm.

In any case, I wouldn’t buy the C-7 model because it’s built for joyriding and sunbathing, and we use our boat for commuting to the cottage and as Tim’s office. Mr Sifvert agreed, suggested they might have something more suitable in the pipeline and wondered if I’d like to sign an NDA (I declined, no point hearing about something I can’t buy or write about right now). Separately, I note that they’re also working on a municipal-ferry product.

I’ll take a close look when the product Mr Sifvert hinted at comes out. But at that price point, I’ll need to try it out — I gather Candela is happy to give you a sea trial if you’re willing to visit Stockholm. Which actually doesn’t sound terrible.

Independent testimony would be nice too. Which is a problem; as I noted in my Jeanneau review (while explaining why I posted it), boats are a small market with a tight-knit community and truly impartial product evaluations by experts are hard to come by.

Futures

The core ideas behind the Candela C-7 aren’t that complicated: Electric powertrain, battery, hydrofoils, software trim control. Because of electric cars there’s a lot of battery and electrical-engine expertise about. Hydrofoils aren’t a new technology either. So getting the software right is maybe the crucial secret sauce?

Thing is, as a citizen of the planet, I’d like to see the roads full of electric cars — this is definitely going to happen — and the waterways full of boats that look like what Candela is building. Which won’t happen at the current price point, but I’m not convinced that can’t be driven down to what the boating world sees as a “mass-market” level. I’ve no way to know whether that will happen. I sure hope so.


Multimodality 5 May 2021, 9:00 pm

My Wednesday consisted mostly of running around and moving things. I used five transport modes and now I can’t not think about environmental impact and practicality and urban futures. Hop on board the car-share, boat, electric car, bus, and bike, and come along for the ride.

Background

What happened was, on Wednesday morning our boat was at the boatyard for a minor repair. They’d squeezed me in on the condition that I show up at 8AM to sail it away — they’re pretty well maxed out this time of year when everyone wants to get back on the water. Simultaneously, following on my recent bike accident, my bike was stuck in the repair shop — also maxed out in spring — because the replacement pedals they’d ordered were defective and the nearest available pair were in an indie bike shop in a distant suburb.

So, here’s what I did.

  1. Got up early and took a car-share which was just outside my house down to Granville Island, where the boatyard is.

  2. Putt-putted the boat over to my spot at the marina, hung out there for a while as I cleaned up the boat, put it into office mode, and did a bit of work.

  3. Walked twenty minutes to the nearest car-share and took that home.

  4. Took the electric car 32km from central Vancouver to Cap’s South Shore Cycle in Delta, picked up the necessary bike part, came back to Vancouver to drop it off at the local bike-repair shop that was working on my bike, then went home for lunch.

  5. When the shop texted me that they were done, took a handy local bus eight stops or so starting four blocks from my house and getting off around the corner from the bike shop.

  6. Biked home!

Now let’s consider the experience and the economics.

Car Share

There’s been a lot of churn in this market over the years, with Zipcar and Car2Go and so on; it may or may not be a thing where you are. Here in Vancouver it’s alive and well under the Evo brand. You pick up the one that’s nearest and drop it off at your destination. There’s a mobile app, obviously.

An Evo car-share

They’re all Toyota Prii and thus pretty energy-efficient. Also, since a significant part of an automobile’s carbon load comes from manufacturing, when you provide more trips with fewer cars, that should be a win.

Having said that, the carbon-load impact story is mixed. The reason is that they’re so damn convenient that they end up replacing, not solo car trips, but public-transit and bike trips. I can testify to that; when I was working downtown at Amazon, mornings when the weather was crappy or I wasn’t feeling 100%, I regularly yielded to the temptation to car-share rather than bike or take the train. On the other hand, we have friends who don’t own a car but probably would were it not for car-share availability.

Anyhow, it’s beastly complicated. Does car sharing reduce greenhouse gas emissions? Assessing the modal shift and lifetime shift rebound effects from a life cycle perspective dives deep on this. Big questions: What kind of trips is the car-share displacing, and do the vehicles wear out faster than individually-owned cars? The image is from that paper linked above.

Transport modes emission factors

Emission factors for three car-share scenarios. The Y axis is equivalent grams of CO2 per passenger-kilometre travelled.

My big gripe with the study is that in my experience, the kind of car offered by car-shares is quite a bit more efficient than the average proprietary car on the road. I’m also puzzled at the low carbon cost assigned to manufacturing; I had thought it closer to 50% from previous readings. No matter how you cut it, though, it’s not simple.

Boating

OMG forgive me, Mother Earth, for I have sinned. When we’re trundling along at 20+ knots to the cabin with weekend supplies, we get 1 km/litre, maybe 1.1. (It’s only 30km.) On the other hand, there’s this.

False Creek from a small boat

Shot while heading over to the boatyard a couple of days ago. It’s awfully nice to be out in a boat.

I console myself with the fact that this is the only petroleum-burning object in our possession; even our house heating and cooking is electric now. But still. There is hope; innovators all over the world are trying to figure out how to make boating less egregiously wasteful. For example, consider the lovely Candela boats, which combine modern battery technology with hydrofoils to get all that fiberglass up out of the water. They’re not the only ones; other manufacturers, mostly in Europe, are trying to get the right combination of range, performance, and carrying capacity. I sincerely hope to be able to buy such a thing in the boating years that remain to me.

But in my actual boat in the year 2021, it’s not good.

Driving (e-car)

I took another car-share Prius home (a long walk to get it, this time) and had time for a coffee before I fired up the electric car to go get bike parts.

Jaguar I-Pace in the rain

I confess to having been all excited about this trip; it’s been a year or more since I’ve had the Jag out on the highway. Traffic was light and I may have driven a little too fast; actually cried out with joy as I vectored into the approaches to one of the Annacis Island bridges. The contrast to the friendly-but-frumpy little Prius is stark.

Let’s look at another graph, from Which form of transport has the smallest carbon footprint?

carbon footprint for various travel modes

I encourage visiting the paper because this graphic is interactive; you can add lots more travel modes.

Where I live the electricity is very green, but even with dirty coal-based power, battery cars are still way more carbon-efficient than the gas-driven flavor simply because turning fossil fuels into forward motion is so beastly inefficient. Still, considering manufacturing carbon cost, a single human in a heavy metal box is never going to be a really great choice, environmentally.

But wait: This is exactly the kind of errand cars exist for. There’s no good argument for decent public-transit service between residential Vancouver and a small store in a strip mall in a remote suburb. It’s too far to bike. The advantages of car-share, as we’ve seen, aren’t that overwhelming.

I once read a piece of analysis — sorry, can’t remember where — that suggested a future where lots of people still have cars, but that they are rarely used, mostly just sit there. Until, for example, you have to head off to fetch something from a distant suburb. That sounds plausible to me, partly because it describes our situation: We regularly get plaintive complaints from the Jag’s remote-control app saying that since we haven’t gone near it in days, it’s going into deep-sleep mode.

Busing

I generally don’t mind taking public transit, if only for the people-watching, except when there’s rush-hour compression.

Unfortunately, the carbon economics depend really a lot on how full your buses and trains are; so much so that fossil-fuel shills have written deeply dishonest studies (which I’m not gonna link to) arguing that cars are more planet-friendly than buses unless the buses are really full all the time. In fact, in most places they’re full enough that buses come out ahead on the carbon arithmetic.

And there’s another subtle point: A successful public-transit system has to run some trains and buses at suboptimal loads because otherwise people won’t be able to depend on it to get around and will just go ahead and buy a car and then start driving everywhere.

And having said that, Covid is definitely not helping; check out this picture I took on the bus today.

On Vancouver’s #3 bus, May 5, 2021

Vancouver’s #3 Main Street bus on a Wednesday afternoon in May 2021.

Biking

Anyhow, I finally, weeks after my accident, found myself back on my wonderful e-bike, heading home.

Trek e-bike at Vancouver’s False Creek

So far on this journey there’s been a whole lot of “It’s complicated.” No longer. Bikes’ carbon load is vanishingly small compared to any other way of traveling further than you want to walk. Plus they’re fun. Plus riding them is good for you. E-bikes hugely expand the range of situations where biking is an option for a non-young non-athletic person.

It’s not complicated. We need more bikes generally and more e-bikes specifically on the road, which means your city needs to invest in bike-lane infrastructure to make people safe and comfortable on two wheels.


Amazon Q1 2021 2 May 2021, 9:00 pm

Amazon announced their financial results for the January to March quarter last Thursday. I was reading them when an email popped up asking if I wanted to talk about them on CNBC Squawk, which I did. In preparation, I re-read the report and pulled together a few talking points; here they are.

Top line

Amazon’s gross revenue increased 41% year-over-year, to $108.5B. To use a technical term: HOLY SHIT! This sort of growth on that sort of base is mind-boggling. Granted, shoppers suffering from Covid coop-up played a substantial role. But still, at some point you have to run into the Law of Big Numbers.

Branching out

But maybe not. One way to increase revenue is to enter more markets, and does Amazon ever do that. The quarterly summary mentioned online pharmacy, NFL merch, “Amazon One” payment tech, “Amazon Business” global procurement system with 5M customers and a $25B top-line, Prime Video’s partnership with the New York Yankees’ media empire, wireless earbuds, and video doorbells. Is there any business sector Amazon is not charging into?

Which brings us to…

AWS

Revenue moved up to a $54B annual run rate, the highlight being 30%-ish growth with 30%-ish operating margin, generating 47% of Amazon’s overall profit. I’d use another technical term but you get the idea.

AWS benefits less from Covid than the rest of the company, I guess; but at this revenue level, both those 30% numbers are pretty astonishing. People wonder why Andy Jassy got the CEO job, but these numbers are all you need to know.

There’s lots more room for growth, too. AWS is like $54B/year these days and my guess is that’s probably around a third of global public-cloud spend. Global Enterprise IT spend is a much, much, bigger number, so there’s no risk of the Cloudistas hitting the limits of their addressable market any time soon.

On the other hand, as I’ve said before, there’s a major potential headwind here. Anyone who’s spending, say, $1M/month with AWS has to be thinking that, at those 30%-ish operating margins, they’re sending roughly $3.6M/year straight to Amazon’s bottom line, and that’s not a very comfortable feeling if Amazon is competing with you, which it probably is, and if it isn’t, apparently will be next year.

If I were an Amazon shareholder (I’m not) or executive, I’d seriously look at doing that AWS spin-off while they can still do it the way they want, rather than the way a Washington DC that seems unusually flush with antitrust energy might eventually dictate, gun in hand.

Climate emergency

Moved up the 100%-renewable-energy date from 2030 to 2025, yay! Journalists: Keep a close eye on them for evidence of dodgy carbon accounting, which is not exactly rare in the business community. If they manage to pull this off, that’s a titanic achievement and I hope they share lots of details, because Amazon is a reasonably typical enterprise, just bigger; plenty to learn here.

The other interesting thing is the Climate Pledge, which now has over 100 signatories including a whole bunch of famous names. This may be significant; when Amazon announced the Pledge, I was too polite to say that it looked like it’d been hastily cooked up and there weren’t many, as in any, other recognizable names attached. The Pledge’s zero-carbon date is 2040, which is way too late, but still, let’s hope this turns into a good thing for the world.

People issues

In Jeff’s goodbye letter he said they were adding top-level corporate goals, and they have. The new text: Amazon strives to be Earth’s Most Customer-Centric Company, Earth’s Best Employer, and Earth’s Safest Place to Work. The last two of those three didn’t use to be there. And I place a whole lot of weight on the last one, about safety. It didn’t get that much public traction, but I thought the most damaging Amazon reportage of the last couple of years was the series of stories about elevated injury rates at Amazon warehouses. Credit is due to Will Evans and the people at RevealNews.

I really sincerely hope that Amazon can bend this particular curve in a better direction.

But while that’s important, there’s a way more important “people” issue. When I got on CNBC, the host (in a segment omitted from the excerpt above) asked me about the unionization story and Amazon announcing a general warehouse-worker raise and so on. He asked “So, are they doing enough?”

No, obviously.

The developed world’s egregious inequality curve, which has been getting monotonically worse since the Seventies, is not going to be addressed by another buck an hour to powerless entry-level workers. Anyhow, it’s not an Amazon problem, Amazon is a symptom. And if you have to take a disempowered entry-level job, there are way worse places to go than Amazon.

I don’t claim to understand American Politics, but I do note that both Bernie Sanders and Josh Hawley are mad at big tech in general and Amazon in particular. But it seems obvious to me that, going forward, a central issue for business leaders — maybe the central issue — will be dealing with political pressure to redress society’s imbalances.

Once again, if I were in leadership, I’d be working on getting out in front on this stuff while I still have the reins in my hands.


Long Links 1 May 2021, 9:00 pm

Welcome once again to Long Links, a monthly curation of long-form pieces that pleased and educated me and that being semi-retired gives me time to enjoy; offered in the hope that one or two might enrich the lives of busier people.

It’s simple (and accurate) to say that Xi Jinping’s regime is barbaric, obscurantist, and brutal, and that the people of China are badly mis-governed. That given, the place is interesting; I for one am continuously astonished that they manage to hold it all together. Maria Repnikova on How China Tells its Story dives deep on this; on the very-grey line between what can and can’t be said, what Chinese propagandists think it is they’re trying to do, and (especially) the differences between the Chinese and Russian approaches to storytelling.

Scale Was the God That Failed is by Josh Marshall, founder and editor of Talking Points Memo, a rare example of an all-digital publication that grew out of a blog and has become a sustainable business. It’s not actually long-form measured by number of words, but contains more intellectual meat on the subject of Internet Publishing than you can find in a dozen typical discourses on the subject. Read this if you want to learn why Internet advertising is so awful, why your local paper went out of business, and why it’s not being replaced by high-profile Web publishers, who these days are mostly in the news for yet another round of journalist layoffs. Here are a couple of sound-bites. [Historically, newspapers and TVs and radio stations enjoyed local monopolies, thus…] “almost all of the elements of good, newspaper journalism—big newsrooms paying middle-class salaries and giving reporters the time to get the story right—were made possible by those monopolies.” Also: “The chronic oversupply of publications chasing a fixed number of ad dollars has required publishers to continually charge less for ads that demand more of readers.” Anyhow, investment-driven big-journo launches have pretty well all failed. Marshall offers several examples of things that have worked, including his own publication. But there’s still a crisis in journalism, which is really bad for democracy.

The Healing Power of Javascript, by well-known writer, photographer, and techie Craig Mod, is probably only for geeks. It’s about the way that coding can be a morale-booster, particularly in depressive plague times. I found it heart-warming. Consider this: “With static sites, we've come full circle, like exhausted poets who have travelled the world trying every form of poetry and realizing that the haiku is enough to see most of us through our tragedies.”

Regular readers will know that I’m an audiophile; I used to really enjoy advising people on buying and setting up good-sounding stereo gear, but that seems to happen less these days. Still, judging by the intense traffic at my local record store, there are a lot of people listening to vinyl. I’m not a fanatic, listen mostly to digital music, but still find the experience of listening to vinyl entrancing. I really liked How to Buy the Best Record Player and Stereo System for Any Budget in Pitchfork. I haven’t heard a lot of the stuff they recommend, but here’s the advice I’ve often given to non-obsessive people who want good sound at a fair price: Decide how much you want to spend. Go buy speakers from PSB, amplification from NAD, and a record player from Rega that fit in your budget; they all have large product lines with entries at all the sane price-points. You’ll be happy. (But don’t buy a Rega cartridge.) Having said that, Marc Hogan, who wrote the Pitchfork piece, probably has fresher data.

Moving from Pitchfork to discogs.com, here are the 50 Most Popular Live Albums of All Time and The 200 Best Albums of the 2010s. I like the methodology; Discogs has as much claim to know what music people like as any organization in the world. As an audiophile and music lover, I have a soft spot for live recordings; a band on stage has more adrenaline in its veins than any studio can generate, and there’s less production interference between the music and you. Of the 50 mentioned here, I own 15. As for the Music Of The Teens, I have only seven. Because my taste these days has become less mainstream. Because what, in this century, are “albums”, anyhow? And because I’m an out-of-touch old fart.

Here’s another one that’s probably geek-only: The TeX tuneup of 2021, by Donald Knuth. TeX, these days, is pretty well only used for scientific publishing, and in just a few science neighborhoods. But it’s wonderful that a piece of software that’s so old and, to be honest, so old-fashioned, is still lovingly maintained. Knuth thinks this may be the last refresh.

This is only 3:54 long and not really a video, just panning and zooming around a photo of the moon. But, wow. The photo is taken with a Leica APO-Telyt-R 2.8/400mm, of which apparently only 390 were ever made, starting back in the Nineties. As I write, you can get one on eBay for $13,999. Might even be worth it.

Speaking of fine photographs captured with unconventional technology, take a tour through Interview: David 'Dee' Delgado’s love letter to New York City, shot on 4x5 film. What beautiful colors. To quote Delgado, self-described as a Puerto Rican independent photographer based in New York City: “This whole project was shot on a Toyo 45A which is a 4x5 field camera. It’s not a light camera. It’s a heavy camera. Lugging that camera along with the film, film holders, and a dark cloth is not an easy task. Shooting 4x5 is a lot slower, especially when you are using a field camera. It slows things down and lets you connect…” It sure connected with me.

Now let’s enter dangerous territory, with The Sexual Identity That Emerged on TikTok. On the surface, it’s about what might or might not be a segment of the gender/sexual spectrum: “super-straight”. On the other hand, it might be ignorant hatefulness. Whatever you may think on that issue, I found Conor Friedersdorf’s exploration of the conversation fascinating. As if discussing the issues weren’t difficult enough, he broadens his scope to include a wider survey of Internet conversation, and stays humane in the face of inhumanity. Anyhow, educational and thought-provoking. Even if you’re outraged.

The Ever Given is still stuck (legally, now) in the Suez Canal, but it’s not news any more. I think it still has stories to tell, though, and offer in evidence Gargantuanisation by John Lanchester in the LRB. [That’s a link to a Google search because the LRB URL is flaky and doesn’t seem to support direct linking.] Lanchester is a very good writer and has first-person experience with problems in the Canal. He has plenty to teach about what the Ever Given means and what trends it exemplifies. The story starts with a question: Have you ever thought about what ship might have brought whatever’s in the package that Amazon just delivered to you across whatever ocean needed crossing? Does anyone? This is a large and pretty well invisible segment of the economy. It’ll be less invisible if you read this.

These days I hear less talk about God from politicians, even in America, where such talk used to be not only commonplace but nearly compulsory. It’s not hard to figure out why; Americans are less religious. 538 does the numbers in It’s Not Just Young White Liberals Who Are Leaving Religion. I’m not sure why this is surprising in a world where supernatural events are not observed and prayers aren’t answered. But it changes lots of societal dynamics and needs to be talked over.

From The Health Care Blog: America’s Health and The 2016 Election: An Unexpected Connection. The piece isn’t that long but I include it because of the graph at the top, entitled “Vitality and the Vote”. It is astonishingly information-dense, the kind of thing that features in the books of Edward Tufte. Looking at it, I observe the following:

  1. The graph measures, not vote distribution by geography, but its first derivative, vote movement.

  2. There is more geography in the South and Midwest than the West and Northeast.

  3. Midwestern Americans are remarkably less healthy than those to their east and west, with the Southerners in between.

  4. [The headline.] Healthier people swung progressive, unhealthy ones to Trump. Obviously there’s a correlation here with age.

  5. I wish there was a mouse-over so I could ask about anomalously outlying counties.

The accompanying text is competent and useful, but wow, that graph.

I don’t know much about Alex Steffen, but his Climate-Crisis coverage has impressed me. The Last Hurrah, from his Substack, asks what a climate radical (like me) should think of Joe Biden’s progress thus far. Despite the obvious fact that Biden’s program is woefully inadequate in the face of onrushing disaster, Steffen finds grounds for optimism. Which is something that we can all use some of these days.

Now, here’s a high-impact story. Microsoft is going to change the default font in Windows from the current and reasonably-OK Calibri. Many people, as in billions, are going to spend a lot of time looking at screens-full of this stuff, so the future of humanity is significantly affected. Thanks to Scott Hanselman for posting a small sample. My response: We can rule out Seaford for its absurd ’ (right single quote). Skeena [swoosh] looks [swoosh] like it’s sponsored by Nike. Something in Grandview makes the kerning hurt. Bierstadt wins for me because the chars snuggle up together with a little more flow.

Finally, you want long-form? Dive into How the Pentagon Started Taking U.F.O.s Seriously, which is a monster. And never not interesting. And no, you probably don’t have to take UFOs seriously even if you enjoy reading a few thousand words about those who do, which you probably will — enjoy reading I mean, not taking seriously.


Twilights 28 Apr 2021, 9:00 pm

I went out for a walk well into twilight time, put the camera in see-in-the-dark mode, fitted a fast friendly lens, and pointed it at pretty things.

Old East Vancouver house at twilight

This is in Eastern Vancouver (“East Van” everyone says), around where Riley Park becomes Mount Pleasant. A lot of the houses are old and, what with Vancouver’s real-estate craziness, probably doomed. Sadly, they won’t be missed very much I suspect.

Light display in an East Vancouver garden Light display in an East Vancouver garden

It’s awfully nice of people to put up lights simply for the enjoyment of light.

Modern cameras, what can I say. Just set the ISO at 12800, free the lens to find whatever aperture keeps the shutter less than 40 or so, and you effortlessly get all these moody shots that would have been simply impossible for any previous generation of photographers.

Streetlights and cherry blossoms in East Vancouver twilight

Those awful old sodium-yellow streetlights are too much with us. Modern lights have less-awful colors. The clash with the cherry blossoms may not be beautiful but is interesting.

Lights amid tulips in East Vancouver twilight

We always have tulips this time of year, but something about 2021 has driven East Van gardeners into tulip frenzy, perhaps as a relief from Covid-lockdown pain?

East Vancouver alley at twilight

Ah, that sodium shade. The alley looks kinda scary but was actually perfectly welcoming.


Algorithm Agility? 24 Apr 2021, 9:00 pm

What happened was, I was fooling around with zero-knowledge proof ideas and needed to post public keys on the Internet in textual form. I picked ed25519 keys (elliptic-curve, also known as EdDSA) so I asked the Internet “How do you turn ed25519 keys into short text strings?” The answer took quite a bit of work to find and, after I posted it, provoked a discussion about whether I was doing the right thing. So today’s question is: Should these things be encoded with the traditional PKIX/PEM serialization, or should developers just blast the key-bits into base64 and ship that?

Previously

Old-school key wrapping

Traditionally, as described in the blog piece linked above, the public key, which might be a nontrivial data structure, is serialized into a byte blob which includes not just the key bits but metadata concerning which algorithm applies, bit lengths, and hash functions.

When I say “old-school” I mean really old, because the technologies involved in the process (ASN.1, PKIX, PEM) date back to the Eighties. They’re complicated, crufty, hard to understand, and not otherwise used in any modern applications I’ve ever heard of.

Having said all that, with a couple of days of digging and then help from YCombinator commentators, the Go and Java code linked above is short and reasonably straightforward and pretty fast, judging from my unit testing, which round-trips a thousand keys to text and back in a tiny fraction of a second.

Since the key serialization includes metadata, this buys you “Algorithm Agility”, meaning that if the flavor of key you’re using (or its supporting hash or whatever) became compromised and untrustworthy, you can change flavors and the code will still work. Which sounds like a valuable thing.

There is, after all, the prospect of quantum computing, which assuming that they can ever get the hardware to do anything useful, could crack lots of modern crypto notably including ed25519. I know very smart people who are betting on quantum being right around the corner, and others, equally smart, who think it’ll never work. Or that if it does, it won’t scale.

The simpler way

Multiple commentators pointed out that ed25519 keys and signatures aren’t data structures, just byte arrays. Further, that there are no options concerning bit length or hash algorithm or anything else. Thus, arguably, all the apparatus in the section just above adds no value. In fact, by introducing all the PKIX-related libraries, you increase the attack surface and arguably damage your security profile.
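
To make that simpler way concrete, here’s a minimal Go sketch of it; the function names are mine, invented for illustration, and this is the idea rather than anybody’s official recommendation.

import (
  "crypto/ed25519"
  "encoding/base64"
  "errors"
)

// KeyToText encodes an ed25519 public key as plain base64: no PEM
// armor, no ASN.1, just the 32 raw key bytes.
func KeyToText(key ed25519.PublicKey) string {
  return base64.StdEncoding.EncodeToString(key)
}

// KeyFromText reverses KeyToText, checking only that the byte count
// is right for ed25519.
func KeyFromText(text string) (ed25519.PublicKey, error) {
  raw, err := base64.StdEncoding.DecodeString(text)
  if err != nil {
    return nil, err
  }
  if len(raw) != ed25519.PublicKeySize {
    return nil, errors.New("not an ed25519 public key")
  }
  return ed25519.PublicKey(raw), nil
}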

Furthermore, they argue, ed25519 is not likely to fail fast; if the algorithms start creeping up on it, there’ll be plenty of time to upgrade the software. I can testify that I learned of multiple in-flight projects that are going in EdDSA-and-nothing-else. And, to muddy the waters, another that’s invented its own serialization with “a 2-3 byte prefix to future proof things.”

Existence proof

I’m working on a zero-knowledge proof where there are two or more different public posts with different nonces, the same public key, and signatures. The private key is discarded after the nonces are signed and the posts are generated, and keypairs aren’t allowed to be re-used. In this particular case it’s really hard to imagine a scenario where I’d feel a need to switch algorithms.

Conclusions?

The question mark is because none of these are all that conclusive.

  1. Algorithm agility is known to work, happens every time anyone sets up an HTTPS connection. It solves real problems.

    [Update: Maybe not. From Thomas Ptacek: Algorithm agility is bad… the essentially nonexistent vulnerabilities algorithm agility has mitigated over TLS's lifetime… ]

  2. Whether or not we think it’s reasonable for people to build non-agile software that’s hardwired to a particular algorithm in general or EdDSA in particular, people are doing it.

  3. I think it might be beneficial for someone to write a very short three-page RFC saying that, for those people, just do the simplest-possible Base64-ification of the bytes. It’d be a basis for interoperability. This would have the potential to spiral into a multi-year IETF bikeshed nightmare, though.

  4. There might be a case for building a somewhat less convoluted and crufty agility architecture for current and future public-key-based applications. This might be based on COSE? This would definitely be a multi-year IETF slog, but I dunno, it does seem wrong that to get agility we have to import forty-year-old technologies that few understand and fewer like.

The current implementation

It’s sort of the worst of both worlds. Since it uses the PKIX voodoo, it has algorithm agility in principle, but in practice the code refuses to process any key that’s not ed25519. There’s an argument that, to be consistent, I should either go to brute-force base64 or wire in real algorithm agility.

Having said that, if you do need to do the PKIX dance with ed25519, those code snippets are probably useful because they’re simple and (I think) minimal.

And another thing. If I’m going to post something on the Internet with the purpose of having someone else consume it, I think it should be in a format that is described by an IETF RFC or W3C Rec or other stable open specification. I really believe that pretty strongly. So for now I’ll leave it the way it is.


How to Interchange Ed25519 Keys 19 Apr 2021, 9:00 pm

Herewith pointers to Java 15 and Go code that converts Ed25519 public keys back and forth between short text strings and key objects you can use to verify signatures. The code isn’t big or complicated, but it took me quite a bit of work and time to figure out, and led down surprisingly dusty and ancient pathways. Posted to help others who need to do this and perhaps provide mild entertainment.
[Update 04/23: “agwa” over at YCombinator showed how to simplify the Go with x509.MarshalPKIXPublicKey and x509.ParsePKIXPublicKey.]

They call modern crypto “public-key”; because keys are public, people can post them on the Internet so other people’s code can use them, for example to verify signatures.

How, exactly, I wondered, would you go about doing that? The good news is that, in Go and Java 15 at least, there is good core library support for doing the necessary incantations, with no need to take any external dependencies.

If you don’t care about the history and weirdness, here’s the Go code and here’s the Java.

Now, on to the back story. But first…

Should I even do this?

“Don’t write your own crypto!” they say. And I’ve never been tempted. But I wonder if there’s a variant form that says “Don’t even use the professionally-crafted crypto libraries and fool around with keypairs unless you know what you’re doing!”

Because as I stumbled through the undergrowth figuring this stuff out in the usual way, via Google and StackOverflow, I did not convince myself that the path I was following was the right one. I wondered if there was a gnostic body of asymmetric-crypto lore and if I’d ever studied it I’d know the right way to do what I was trying to do. And since I hadn’t studied it, I should just leave this stuff to the people who had. I’m not being ironic or sarcastic here, I really do wonder that.

Let’s give it a try

I’m building bits and pieces of software related to my @bluesky Identity scheme. I want a person to be able to post a “Zero-Knowledge Proof” which includes a public key and a signature so other people can check the signature — go check that link for details. I started exploring using the Go programming language, which has a nice generic crypto package and then a boatload of other packages for different key flavors, with names like aes and rsa and ed25519 that even a crypto peasant like me recognizes.

Ed25519

This is the new-ish hotness in public-key tech. It’s fast and very safe and produces small keys and signatures. It’s also blessedly free of necessary knobs to twist, for example you don’t have to think about hash functions. It’s not quantum-safe but for this particular application that’s a complete non-issue. I’ve come to like it a lot.

Anyhow, a bit of poking around the usual parts of the Internet suggested that what a good little Gopher wants is crypto/ed25519. For a while I was wondering if I might need to support other algorithms too; let’s push that on the stack for now.

Base64 and PEM and ASN.1, oh my!

I remembered that these things are usually in base64, wrapped in lines that look like -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----. A bit of Googling reminds me that this is PEM format, developed starting in 1985 as “Privacy-Enhanced Mail”. Suddenly I feel young! OK then, let’s respect the past.

Go has a pem package to deal with these things, works about as you’d expect. And what, I wondered, might I find inside?
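
Before getting to what’s inside, here’s roughly what the pem step looks like in Go; a sketch only, with a function name of my own choosing and the error handling trimmed.

import (
  "encoding/pem"
  "errors"
)

// derFromPEM strips the -----BEGIN/END PUBLIC KEY----- armor and
// returns the DER (ASN.1) bytes found inside.
func derFromPEM(pemText []byte) ([]byte, error) {
  block, _ := pem.Decode(pemText)
  if block == nil || block.Type != "PUBLIC KEY" {
    return nil, errors.New("no PEM public key found")
  }
  return block.Bytes, nil
}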

Abstract Syntax Notation One

Everyone just says ASN.1. It’s even older than PEM, dating from 1984. I have a bit of a relationship with ASN.1, where by “a bit of a relationship” I mean that back in the day I hated it intensely. It was in the era before Open Source, meaning that if you wanted to process ASN.1-encoded data, you had to buy an expensive, slow, buggy, parser with a hostile API. I guess I’m disrespecting the past. And it doesn’t help that I have a conflict of interest; when I was in the cabal that cooked up XML, ASN.1 was clearly seen as Part Of The Problem.

Be that as it may, when you have a PEM you have ASN.1 and if you’re a Gopher you have a perfectly-sane asn1 package that’s fast and free. Yay!

Five Oh Nine

It turns out you can’t parse or generate ASN.1 without having a schema. For public keys (and for certs and lots of other things), the schemas come from X.509 which is I guess relatively modern, having been launched in 1988. I know little about it, but have observed that among security geeks, discussion of X.509 (later standardized in the IETF as PKIX) is often accompanied by head-shaking and sad faces.

Key flavors

OK, I’ve de-PEM-ified the data and fed the results to the ASN.1 reader, and now I have what seems to me like a simple and essential question: What flavor of key does this represent? Maybe because I want to toss anything that isn’t ed25519. Maybe because I want to dispatch to the right crypto package. Maybe because I just want to freaking know what this bag of bits claims to be, a question the Internet should have an answer for.

Ladies, gentlemen, and others, when you ask the Internet “How do you determine what kind of public key this is?” you come up empty. Or at least I did. Eventually I stumbled across Signing JWTs with Go's crypto/ed25519 by Blain Smith, to whom I owe a debt of thanks, because it doesn’t assume you know what’s going on, it just shows you step-by-step how to unpack a PEM of an ASN.1 of an ed25519 public key.

It turns out that what you need to do is dig into that ASN.1 data and pull out an “Object Identifier”. At which point my face brightened up because do I ever like self-describing data. So I typed “ASN.1 Object Identifier” into Google and, well, unbrightening set in.

We must go deeper

At which point I wrote a little Go program whose inputs were a random RSA-key PEM I found somewhere and the ed25519 example from Blain Smith’s blog. I extracted an Object Identifier from each and discovered that Object Identifiers are arrays of numbers; for the RSA key, 1.2.840.113549.1.1.1, and for the elliptic-curve key, 1.3.101.112.
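
That little program amounted to something like the following sketch (mine, not Blain Smith’s code): define a struct that mirrors X.509’s SubjectPublicKeyInfo and let the asn1 package pull the Object Identifier out.

import (
  "crypto/x509/pkix"
  "encoding/asn1"
)

// subjectPublicKeyInfo mirrors the X.509 SubjectPublicKeyInfo layout:
// an AlgorithmIdentifier (which carries the Object Identifier)
// followed by the key bits.
type subjectPublicKeyInfo struct {
  Algorithm pkix.AlgorithmIdentifier
  PublicKey asn1.BitString
}

// oidOf pulls the Object Identifier out of a DER-encoded public key,
// for example 1.3.101.112 for ed25519.
func oidOf(der []byte) (asn1.ObjectIdentifier, error) {
  var spki subjectPublicKeyInfo
  if _, err := asn1.Unmarshal(der, &spki); err != nil {
    return nil, err
  }
  return spki.Algorithm.Algorithm, nil
}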

So I googled those strings and “1.3.101.112” led me to Appendix A of RFC8420, which has a nice simple definition and a note that the real normative reference is RFC8410, whose Section 3 discusses the question but does not actually include the actual strings encoded by the ASN.1.

“Oh,” I thought, “there must be a helpful registry of these values!” There sort of is, which I found by pasting the string values into Google: The informal but reasonably complete OID Repository. Which, frankly, doesn’t look like an interface you want to bet your future on. But did confirm that 1.2.840.113549.1.1.1 means RSA and 1.3.101.112 means ed25519.

So I guess I could write code based on those findings. But first, I decided to write this. Because the journey was not exactly confidence-inspiring. After all, public-key cryptography and its infrastructure (usually abbreviated “PKI”) is fucking foundational to fucking Internet Security, by which these days I mean banking and payments and privacy and generally truth.

And I’m left hoping the trail written in fairy dust on cobwebs that I just finished following is a best practice.

Then I woke up

At this point I talked to a friend who is crypto-savvy and asked “Would it be OK to require just ed25519, hardwire that into the protocol and refuse to consider anything else?” They said: “Yep, because first of all, these are short-lived signatures and second, if ed25519 fails, it’ll fail slowly and there’ll be time to migrate to something else.” Which considerably simplified the problem.

And by this time I understood that the conventional way to interchange these things is as base64’ed encoding of ASN.1 serializations of PKIX-specified data structures. It would have been nice if one of my initial searches had turned up a page saying just that. And it turns out that there are libraries to do these things, and that they’re built into modern programming languages so you don’t have to take dependencies.

G, meet J

So what I did was write two little libraries, one each in Go and Java, to translate public keys back and forth between native program objects and base64, either encoded in the BEGIN/END cruft or not.

First let’s go from in-program data objects to base64 strings. In Go you do like so:

  1. Use the x509 package’s MarshalPKIXPublicKey method to turn the key into the ASN.1-serialized bytes.

  2. Base64 the bytes. You’re done! (A minimal Go sketch follows just below.)
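
Here’s a minimal sketch of those two steps; not the exact code from the repository linked above, and the function name is mine.

import (
  "crypto/ed25519"
  "crypto/x509"
  "encoding/base64"
)

// publicKeyToText serializes an ed25519 public key into its
// PKIX/ASN.1 form and then base64s that, yielding a short text string.
func publicKeyToText(key ed25519.PublicKey) (string, error) {
  der, err := x509.MarshalPKIXPublicKey(key)
  if err != nil {
    return "", err
  }
  return base64.StdEncoding.EncodeToString(der), nil
}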

In Java it’s like this:

  1. Use the getEncoded method of PublicKey to get the ASN.1-serialization byte sequence.

  2. Base64 the bytes, you’re done.

By the way, the base64 is only sixty characters long, gotta love that ed25519.

The next task is to read that textually-encoded public key into its native form as a programming-language object that you can use to verify signatures. In Go:

  1. De-base64 the string into bytes.

  2. Use the x509 package’s ParsePKIXPublicKey method to do the ASN.1 voodoo and extract the public key.

  3. The key comes back as an interface{} so you have to do some typecasting to get the ed25519 key, but then you can just return it. (There’s a sketch of this just below.)
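
Here’s a sketch of those three steps, again abbreviated and with a name of my own rather than the project’s actual code.

import (
  "crypto/ed25519"
  "crypto/x509"
  "encoding/base64"
  "errors"
)

// publicKeyFromText reverses publicKeyToText: de-base64, parse the
// PKIX/ASN.1 serialization, and typecast to an ed25519 key.
func publicKeyFromText(text string) (ed25519.PublicKey, error) {
  der, err := base64.StdEncoding.DecodeString(text)
  if err != nil {
    return nil, err
  }
  parsed, err := x509.ParsePKIXPublicKey(der)
  if err != nil {
    return nil, err
  }
  key, ok := parsed.(ed25519.PublicKey)
  if !ok {
    return nil, errors.New("not an ed25519 public key")
  }
  return key, nil
}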

For Java:

  1. Discard the BEGIN and END crap if it’s there.

  2. Decode the remaining Base64 to yield a stream of bytes, which is the ASN.1 serialization of the data structure.

  3. Now you need an ed25519-specialized KeyFactory instance, for which there’s a getInstance method.

  4. Now you make a new X509EncodedKeySpec by feeding the ASN.1 serialized bytes to its constructor.

  5. Now your KeyFactory can generate a public key if you feed the X509 thing to its generatePublic method. You’re done!

Other languages?

It might be of service to the community for someone else to bash equivalent tools out in Python and Rust and JS and whatever else, which would be a good thing because public keys are a good thing and ed25519 is a good thing.

References

Useful blogs & articles:

  1. Java EdDSA (Ed25519 / Ed448) Example

  2. How to Read PEM File to Get Public and Private Keys

  3. Export & Import PEM files in Go

  4. Signing JWTs with Go’s crypto/ed25519

  5. Ed25519 in JDK 15, Parse public key from byte array and verify. I think this one is a hot mess, but it had high Google Juice, so consider this pointer cautionary. I ended up not having to do any of this stuff.

  6. ASN.1 key structures in DER and PEM

RFCs:

  1. 8419: Use of Edwards-Curve Digital Signature Algorithm (EdDSA) Signatures in the Cryptographic Message Syntax (CMS)

  2. 8032: Edwards-Curve Digital Signature Algorithm (EdDSA)

  3. 7468: Textual Encodings of PKIX, PKCS, and CMS Structures

  4. 8410: Algorithm Identifiers for Ed25519, Ed448, X25519, and X448 for Use in the Internet X.509 Public Key Infrastructure

  5. 8420: Using the Edwards-Curve Digital Signature Algorithm (EdDSA) in the Internet Key Exchange Protocol Version 2 (IKEv2)


Spring Flowers, 2021 11 Apr 2021, 9:00 pm

48 hours ago I got my first Covid-19 vaccine dose, and today I took the camera for a stroll, hunting spring flowers. What a long strange trip it’s been.

Timothy Bray was vaccinated April 9th, 2021

Should I be concerned that the drugstore guy didn’t bother to sign? By the way, CHADOX1-S RECOMBINANT is better known as Astra Zeneca.

Vaccinated how?

They’re currently working their way through really old people and other targeted groups like teachers and some industrial workers with the Pfizer and Moderna. There seem to be a fair number of AZ doses arriving, and they’re not recommended for people under 55. So those of us in the 55-65 bracket can sign up at pharmacy branches; I did a couple of weeks back and got an SMS Friday morning.

I felt really baffed out and sore the day after, and just a bit sore today; nothing a bit of Ibuprofen can’t handle. Apparently in Canada we’re on multi-month delay between shots; so it’s not clear when I can go see my Mom, who got her first Pfizer dose on March 19th.

April 2021

It’s been a cold blustery spring but that doesn’t seem to bother the botanicals, especially the fruit trees, some of whom have already peaked.

Flowering fruit tree in Vancouver’s Riley Park neighborhood

Spot the clothesline.

Nobody I know closely has been struck down by Covid, but people I love are suffering from one ailment or another as I write because that’s how life is. I live in a country with seasons, which means we are all subject to morale-boosting sensory stimuli at this time of year. Grab hold of them! We can all use all the help we can get.

Tiny flowers, April in Vancouver

From here on in, the pictures are courtesy of the strong-willed 40-year-old Pentax 100/F2.8.

What comes next, as the vaccinated proportion of the population grows monotonically (but asymptotically) and then the wave of second doses washes up behind the first’s?

I guess I’m talking about people in the privileged parts of the world, since it looks like the spread of vaccinations will be measurably, irrefutably, deeply racist and joined at the hip with the world’s egregiously awful class structure.

I just want to go to a rock concert.

These rhodos are ready to burst.

Rhododendron blossoms about to open

I’m kind of jaded about daffodils but this one was so pretty against the backdrop that I couldn’t resist.

Daffodil with something pink behind

I hope everyone reading this has something to look forward to so that getting out of bed Monday morning is more than just a chore. Failing that, if you’re in the Northern Hemisphere anyhow, there are incoming flowers. Tell them I said hello.

Mixed flowers in April in Vancouver


The Sacred “Back” Button 10 Apr 2021, 9:00 pm

Younger readers will find it hard to conceive of a time in which every application screen didn’t have a way to “Go Back”. This universal affordance was there, a new thing, in the first Web browser that anyone saw, and pretty soon after that, more or less everything had it. It’s a crucial part of the user experience and, unfortunately, a lot of popular software is doing it imperfectly. Let’s demand perfection.

Why it matters

Nobody anywhere is smart enough to build an application that won’t, in some situations, confuse its users. The Back option removes fear and makes people more willing to explore features, because they know they can always back out. It was one of the reasons why the nascent browsers were so much better than the Visual Basic, X11, and character-based interface dinosaurs that then stomped the earth.

Thus I was delighted, at the advent of Android, that the early phones had physical “back” buttons.

Early Android “G1” phone

The Android “G1 Developer Phone”, from 2008. Reproduced from my Android Diary series, which I flatter myself was influential at the time.

I got so excited I wrote a whole blog piece about it.

Nowadays Android phones don’t have the button, but do offer a universal “Back” gesture and, as an Android developer, you don’t have to do anything special to get sane, user-friendly behavior. I notice that when I use iOS apps, they always provide a back arrow somewhere up in the top left corner; don’t know if that costs developers extra work.

Imperfections

The most important reason I’m listing these problems is to offer a general message: When you’re designing your UX, think hard about the Back affordance! I have seen intelligent people driven to tears when they get stuck somewhere and can’t back out.

People using your software generally have a well-developed expectation of what Back should do at any point in time, and any time you don’t meet that expectation you’ve committed a grievous sin, one you should remedy right now.

Problem: The Android Back Stack

Since we started with mobile, let’s talk Android. The “Activities” that make up Android apps naturally form a Back Stack as you follow links from one to the other. It turns out that Android makes it possible to compose and manipulate your stack. One example would be Twitter, which occasionally mails me about people I follow having tweeted something that it thinks might interest me, when I haven’t been on for a while. It’s successful enough that I haven’t stopped it.

When I click on the link, it leaps straight to the tweet in question. But when I hit Back, I don’t return to my email because Twitter has interposed multiple layers of itself on the stack. So it takes several hops to get back to where I followed the link from.

This is whiny, attention-starved behavior. I’m not saying that Back should always 100% revert to the state immediately before the forward step; I’ve seen stack manipulation be useful. But this isn’t, it’s just pathetic.

Problem: SPA foolery

When, in my browser, I click on something and end up looking at something, and then I’m tired of looking at it and go Back, I should go back to where I started. Yes, I know there’s a convention, when an image pops up, that it’ll go away if you hit ESC. And there’s nothing wrong with that. But Back should work too. The Twitter Web interface does the right thing here when I open up a picture. ESC works, but so does Back.

Feedly doesn’t get this right; if you’re in a feed and click a post, it pops to the front and hitting ESC is the only way to make it go away; Back takes you to a previous feed! (Grrrr.) Also zap2it, where I go for TV listings, has the same behavior; Back takes you right out of the listing.

(Zap2it demonstrates another problem. I have my local listings in a mobile browser bookmark on one of my Android screens, which opens up Chrome with the listings. Except the tab is somehow magically different: if I flip away from Chrome and then return to it, the tab is gone. Hmm.)

Problem: Chrome new-tab links

When I’m in Chrome and follow a link that opens a new tab, Back just doesn’t work. If I close the tab, for example with Command-W on Mac, it hops back to the source link. In what universe is this considered a good UX? To make things worse, there are many things that make Chrome forget the connection between the tab in question and its source link, for example taking a look at a third tab. Blecch.

Fortunately, Safari is saner. Well, mostly…

Problem: Safari new-tab links

I’m in Safari and I follow a link from my Gmail tab to wherever and it ends up in a new tab. Then, when I hit Back, there I am back in Gmail, as a side-effect closing the freshly-opened tab. Which is sane, rational, unsurprising behavior. (It’s exactly the same effect you get by closing the tab on Chrome, which is why Chrome should use Back to achieve that effect.)

Except, did I mention that the browser forgets? Safari’s back-linkage memory is encoded in letters of dust inscribed on last month’s cobwebs. More or less any interaction with the browser and it’s No More “Back” For You, Kid.

Especially irritating are pages that intercept scroll-down requests with a bunch of deranged JavaScript fuckery (that I darkly suspect is aimed at optimizing ad exposures) and that my browser interprets as substantive enough to mean “No More Back For You”. I swear I worry about farting too loudly because that might give Safari an excuse to forget, out of sheer prissiness.

Please let ’em back out

Screwing with the path backward through your product is inhumane, stupid, and unnecessary. Don’t be the one that gets in people’s way when they just want to step back.


Long Links 1 Apr 2021, 9:00 pm

Welcome to the monthly “Long Links” post for March 2021, in which I take advantage of my lightly-employed status to curate a list of pointers to good long-form stuff that I have time to savor but you probably don’t, but which you might enjoy one or two of. This month there’s lots of video, a heavier focus on music, and some talk about my former employer.

What with everything else happening in the world, people who are outside of Australia may not have noticed that they had a nasty sex scandal recently. My sympathy to the victims, and my admiration goes out to Australian of the Year Grace Tame's full National Press Club address, which is searing and heart-wrenching. I don’t know much else about what Ms Tame has done, but I’d award her the honor for just the speech, which a lot of people need to listen to. Probably including you; I know I did.

"The ocean takes care of that for us", by Fiona Beaty, thinks elegantly and eloquently about the relationship between oceanfront humans and the ocean. Indigenous nations saw the ocean as a self-sustaining larder, and it could be that again. Assuming we can learn to act like adults in our relationship to the planet we live on.

I’m a music lover and an audio geek, and like most such people, have long lamented the brutal dynamic-range compression applied to popular music with the goal of making sure that it’s never not as loud as the other songs it’s being sequenced with on the radio in the car. The New Standard That Killed the Loudness War points out that the music-streaming landscape, a financial wasteland for musicians mind you, is at least friendlier to accuracy in audio. I especially love it when a live band gets into a vamp and the singer says “take it down, now”, and they drift down then surge back. No reason pop recordings shouldn’t use that technique; it’s basic to classical music. Now they can.

What Key is Hey Joe In? (YouTube). By watching this I learned that Hendrix didn’t actually write Hey Joe. It’s not 100% clear who actually did write it, and it’s also unclear what key it’s in. Adam Neely has an unreasonable amount of fun exploring this, and there isn’t a simple answer. The most useful thing you can say is that it’s designed to sound good on a conventionally-tuned guitar. If you’re not literate in music theory you’ll miss some of the finer points, but you might still enjoy this.

One of the pointers I followed out of this video was to a massive blog piece by Ethan Hein entitled Blues tonality, which I’m going to say covers the subject exhaustively but also entertainingly, with lots of cool embedded videos to reinforce his musical points. Some of which aren’t music that any sane person would think of as Blues.

And still more music! My streaming service offered up a number by Emancipator and I found myself thinking “Damn, that’s beautiful, who is it?” Behind the name “Emancipator” is Douglas Appling, a producer and DJ who decided to be a musician too, and am I ever glad he did. The music is mostly pretty smooth and you might be forgiven for thinking “nice chill lightweight stuff” but I think there’s a lot of there there. The DJ influence is pretty plain, but to me, this sounds more like classical music than anything else; carefully composed and sequenced with a lot of attention to highlighting the timbres of the instruments. Mr Appling has put together a band to take the music on the road and I think it’d be a fun show. Here’s a YouTube: Emancipator Ensemble, live in 2018.

I hate to end the musical segment here on a downer, but remember how I mentioned that the streaming landscape is a place where musicians go to starve? Islands in the Stream dives at length into the troubled and massively dysfunctional relationship between music and the music business. This picture has been rendered still darker, of course, by Covid, which has taken musicians off the road, the last place where they can sometimes make a decent buck for offering decent music. At some point a truly enlightened government will introduce a minimum wage for musicians, which means that the price you pay to stream will probably have to go up; Sorry not sorry.

Let’s move over to politics. Like most people, I read Five Thirty Eight for the poll-wrangling and stats, but sometimes they unleash a smart writer in an interesting direction. The smart writer in this case is Perry Bacon, Jr; in The Ideas That Are Reshaping The Democratic Party And America, he itemizes the current progressive consensus in clear and even-handed language. This being 538, there are of course numbers and graphs, and a profusion of links to source data, making this what I would call a scholarly work. Most important phrase, I think: “many of these views are evidence-based — rooted in a lot of data, history and research.” The piece is the first of a two-part series. Next up is Why Attacking ‘Cancel Culture’ And ‘Woke’ People Is Becoming The GOP’s New Political Strategy. In case you hadn’t noticed, the American right, so far this year, has largely abandoned discussing actual policy issues and has retreated into an extended howl of outrage about how “woke” people are trampling free speech via “cancel culture”. Since this is coming from a faction that enjoys being led by Donald Trump, it’s too much to expect integrity or intellectual rigor in their arguments. But from an analytical point of view, who cares? What matters is whether or not the stratagem will work. The evidence on that is, well, mixed.

These days, a lot of politics coverage seems to involve my former employer Amazon. How Amazon’s Anti-Union Consultants Are Trying to Crush the Labor Movement is not trying to convince you of anything, it is simply a tour through America’s anti-unionization establishment and the tools Amazon has been deploying nationwide and in Alabama. They’re spending really a lot of money. What on Earth Is Amazon Doing? is a well-written survey of the company’s late-March social-media offensive, kicking sand in legislators’ faces and pooh-poohing the peeing-in-bottles stories. Amazon is a well-run company but nobody would call this a well-run PR exercise. Is this a well-thought-out eight-dimensional chess move, or did leadership just briefly lose its shit?

The most important Amazon-related piece, I thought, was A Shopper’s Heaven by Charlie Jarvis in Real Life Magazine, which I’ve not previously encountered. It’s building on the same territory that I did in Just Too Efficient (by a wide margin the most radical thing I’ve ever published) — at some point, the relentless pursuit of convenience and efficiency becomes toxic, and we are way way past that point.

OK, enough about Amazon. But let’s beat up on the tech business some more, just for fun this time, with How to Become an Intellectual in Silicon Valley, an exquisitely pissy deconstruction of Bay Aryan thought leaders. Yes, it is indeed mean-spirited, but seriously, those people brought it on themselves.

I recommend John Scalzi’s Teaching “The Classics”, which wonders out loud why high-school students still have Hawthorne and Fitzgerald inflicted on them. There is one faction who feels that those Books By Dead White Guys are essential in crafting a well-rounded human, and others who argue that it’s time to walk away from those monuments to overwriting built on foundations most well-educated people now find morally repugnant. Scalzi finds fresh and entertaining things to say on the subject.

Let’s try to end on a high note. There was a news story about Wikipedia noticing that their content is mined and used by multiple for-profit concerns, using access methods (I’m not going to dignify them with the term “APIs”) that are not designed for purpose. Following on this, they had the idea of building decent APIs to make it convenient, reliable, and efficient to harvest Wikipedia data, and charging for their use, thus generating a revenue stream for long-term support of Wikipedia’s work. This is both promising and perilous — fuck with the Wikipedia editorial community’s loathing for most online business models at your peril. Anyhow, Wikimedia Enterprise/Essay is the best insider’s look at the idea that I’ve run across. [Disclosure: I’ve had a couple of conversations with these people because I’d really like to help.]

And finally, a tribute to one of my personal favorite online nonprofits: The internet is splitting apart. The Internet Archive wants to save it all forever. Just read it.


Topfew+Amdahl.next 31 Mar 2021, 9:00 pm

I’m in fast-follow mode here, with more Topfew reportage. Previous chapters (reverse chrono order) here, here, and here. Fortunately I’m not going to need 3500 words this time, but you probably need to have read the most recent chapter for this to make sense. Tl;dr: It’s a whole lot faster now, mostly due to work from Simon Fell. My feeling now is that the code is up against the limits and I’d be surprised if any implementation were noticeably faster. Not saying it won’t happen, just that I’d be surprised. With a retake on the Amdahl’s-law graphics that will please concurrency geeks.

What we did

I got the first PR from Simon remarkably soon after posting Topfew and Amdahl. All the commits are here, but to summarize: Simon knows the Go API landscape better than I do and also spotted lots of opportunities I’d missed to avoid allocating or extending memory. I spotted one place where we could eliminate a few million map[] updates.

Side-trip: map[] details

(This is a little bit complicated but will entertain Gophers.)

The code keeps the counts in a map[string]*uint64. Because the value is a pointer to the count, you really only need to update the map when you find a new key; otherwise you just look up the value and say something like

countP, exists := counts[key]
if exists {
  *countP++
} else {
  // update the map
}

It’d be reasonable to wonder whether it wouldn’t be simpler to just say:

*countP++
counts[key] = countP // usually a no-op

Except that Topfew keeps its data in []byte to avoid creating millions of short-lived strings. But unfortunately, Go doesn’t let you key a map with []byte, so when you reference the map you say things like counts[string(keybytes)]. That turns out to be efficient because of this code, which may at first glance appear an egregious hack but is actually a fine piece of pragmatic engineering: recognizing when a map is being keyed by a stringified byte slice, the compiled code dodges creating the string.

But of course if you’re updating the map, it has to create a string so it can retain something immutable to do hash collision resolution on.

For all those reasons, the if exists code above runs faster than updating the map every time, even when almost all those updates logically amount to no-ops.
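
Putting the pieces together, here’s a minimal, self-contained sketch of the pattern; the variable names are mine, not necessarily Topfew’s, and the input is a toy slice rather than real log records.

package main

import "fmt"

func main() {
  counts := make(map[string]*uint64)
  keys := [][]byte{[]byte("/ongoing"), []byte("/ongoing"), []byte("/misc")}
  for _, keyBytes := range keys {
    // The compiler recognizes a map indexed by string(someByteSlice)
    // and skips allocating a string for the lookup.
    if countP, exists := counts[string(keyBytes)]; exists {
      *countP++
    } else {
      // Only a genuinely new key pays for the string allocation the map
      // needs as an immutable copy.
      count := uint64(1)
      counts[string(keyBytes)] = &count
    }
  }
  for key, count := range counts {
    fmt.Println(key, *count)
  }
}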

Back to concurrency

Bearing in mind that Topfew works by splitting up the input file into segments and farming the occurrence-counting work out to per-segment threads, here’s what we got:

  1. A big reduction in segment-worker CPU by creating fewer strings and pre-allocating slices.

  2. Consequent on this, a big reduction in garbage creation, which matters because garbage collection is necessarily single-threaded to some degree and thus on the “critical path” in Amdahl’s-law terms.

  3. Modest speedups in the (single-threaded, thus important) occurrence-counting code.

But Simon was just getting warmed up. Topfew used to interleave filtering and field manipulation in the segment workers with a batched mutexed call into the single-threaded counter. He changed that so the counting and ranking is done in the segment workers and when they’re all finished they send their output through a channel to an admirably simple-minded merge routine.
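
In outline, the new shape looks something like the sketch below; it’s deliberately simple-minded, assumes each worker is handed its segment’s lines pre-split, and skips the filtering, ranking, and error handling that the real code does.

package main

import "fmt"

// topFew is a toy version of the design described above: each segment worker
// counts occurrences in its own map, then ships that map over a channel to a
// single merge loop.
func topFew(segments [][][]byte) map[string]uint64 {
  results := make(chan map[string]uint64)
  for _, segment := range segments {
    go func(lines [][]byte) {
      local := make(map[string]uint64)
      for _, key := range lines {
        local[string(key)]++
      }
      results <- local
    }(segment)
  }
  merged := make(map[string]uint64)
  for range segments {
    for key, n := range <-results {
      merged[key] += n
    }
  }
  return merged
}

func main() {
  segments := [][][]byte{
    {[]byte("/a"), []byte("/b"), []byte("/a")},
    {[]byte("/a"), []byte("/c")},
  }
  fmt.Println(topFew(segments))
}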

Anyhow, here’s a re-take of the Go-vs-Rust typical-times readout from last time:

Go (03/27): 11.01s user 2.18s system  668% cpu 1.973 total
      Rust: 10.85s user 1.42s system 1143% cpu 1.073 total
Go (03/31):  7.39s user 1.54s system 1245% cpu 0.717 total

(Simon’s now a committer on the project.)

Amdahl’s Law in pictures

This is my favorite part.

First, the graph that shows how much parallelism we can get by dividing the file into more and more segments. Which turns out to be pretty good in the case of this particular task, until the parallel work starts jamming up behind whatever proportion is single-threaded.

You are rarely going to see a concurrency graph that’s this linear.

CPU usage as a function of the number of cores requested

Then, the graph that shows how much execution speeds up as we deal the work out to more and more segments.

Elapsed time as a function of the number of cores requested

Which turns out to be a lot up front, then a bit more, then none at all, per Amdahl’s Law.

Go thoughts

I like Go for a bunch of reasons, but the most important is that it’s a small simple language that’s easy to write and easy to read. So it bothers me a bit that to squeeze out these pedal-to-the-metal results, Simon and I had to fight against the language a bit and use occasionally non-intuitive techniques.

On the one hand, it’d be great if the language squeezed the best performance out of slightly more naive implementations. (Which by the way Java is pretty good at, after all these years.) On the other hand, it’s not that often that you really need to get the pedal this close to the metal. But command-line utilities applied to Big Data… well, they need to be fast.


Topfew and Amdahl 27 Mar 2021, 8:00 pm

On and off this past year, I’ve been fooling around with a program called Topfew (GitHub link), blogging about it in Topfew fun and More Topfew Fun. I’ve just finished adding a few nifty features and making it much faster; I’m here today first to say what’s new, and then to think out loud about concurrent data processing, Go vs Rust, and Amdahl’s Law, of which I have a really nice graphical representation. Apologies because this is kind of long, but I suspect that most people who are interested in either are interested in both.

Reminder

What Topfew does is replace the sort | uniq -c | sort -rn | head pipeline that you use to do things like find the most popular API call or API caller by ploughing through a logfile.

When last we spoke…

I had a Topfew implementation in the Go language running fine, then Dirkjan Ochtman implemented it in Rust, and his code ran several times faster than mine, which annoyed me. So I did a bunch more optimizations and claimed to have caught up, but I was wrong, for which I apologize to Dirkjan — I hadn’t pulled the most recent version of his code.

One of the big reasons Dirkjan’s version was faster was that he read the input file in parallel in segments, which is a good way to get data processed faster on modern multi-core processors. Assuming, of course, that your I/O path has good concurrency support, which it might or might not.

[Correction: Thomas Jung writes to tell me he implemented the parallel processing in rust_rs. He wrote an interesting blog piece about it, also comparing Xeon and ARM hardware.]

So I finally got around to implementing that and sure enough, runtimes are way down. But on reasonable benchmarks, the Rust version is still faster. How much? Well, that depends. In any case both are pleasingly fast. I’ll get into the benchmarking details later, and the interesting question of why the Rust runs faster, and whether the difference is practically meaningful. But first…

Is parallel I/O any use?

Processing the file in parallel gets the job done really a lot faster. But I wasn’t convinced that it was even useful. Because on the many occasions when I’ve slogged away trying to extract useful truth from big honkin’ log files, I almost always have to start with a pipeline full of grep and sed calls to zero in on the records I care about before I can start computing the high occurrence counts.

So, suppose I want to look in my Apache logfile to find out which files are being fetched most often by the popular robots; I’d use something like this:

egrep 'googlebot|bingbot|Twitterbot' access_log | \
    awk ' {print $7}' | sort | uniq -c | sort -rn | head

Or, now that I have Topfew:

egrep 'googlebot|bingbot|Twitterbot' access_log | tf -f 7

Which is faster than the sort chain but there’s no chance to parallelize processing standard input. Then the lightbulb went on…

If -f 1 stands in for awk ' { print $1}' and distributes that work out for parallel processing, why shouldn’t I have -g for grep and -v for grep -v and -s for sed?

Topfew by example

To find the IP address that most commonly hits your web site, given an Apache logfile named access_log:

tf -f 1 access_log

Do the same, but exclude high-traffic bots. The -v option has the effect of grep -v.

tf -f 1 -v googlebot -v bingbot (omitting access_log)

The opposite; what files are the bots fetching? As you have probably guessed, -g is like grep.

tf -f 7 -g 'googlebot|bingbot|Twitterbot'

Most popular IP addresses from May 2020.

tf -f 1 -g '\[../May/2020'

Let’s rank the hours of the day by how much request traffic they get.

tf -f 4 -s "\\[[^:]*:" "" -s ':.*$' '' -n 24

So Topfew distributes all that filtering and stream-editing out and runs it in parallel, since it’s all independent, and then pumps it over (mutexed, heavily buffered) to the (necessarily) single thread that does the top-few counting. All of the above run dramatically faster than their shell-pipeline equivalents. And they weren’t exactly rocket science to build; Go has a perfectly decent regexp library that even has a regexp.ReplaceAll call that does the sed stuff for you.
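
To make that concrete, here’s roughly what one worker does to each record; the patterns and the sample record are made up for the example, but regexp.MustCompile, Match, and ReplaceAll are the sort of standard-library calls that do the -g / -v / -s work.

package main

import (
  "fmt"
  "regexp"
)

func main() {
  keep := regexp.MustCompile(`googlebot|bingbot|Twitterbot`) // like -g
  drop := regexp.MustCompile(`Googlebot-Image`)              // like -v
  sed := regexp.MustCompile(`:.*$`)                          // like -s ':.*$' ''

  record := []byte(`66.249.66.1 - - [27/Mar/2021:14:33:02 -0700] "GET /ongoing/ HTTP/1.1" 200 ... googlebot ...`)
  if keep.Match(record) && !drop.Match(record) {
    key := sed.ReplaceAll(record, []byte(""))
    fmt.Printf("%s\n", key) // the stream-edited key heads off to be counted
  }
}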

I found that getting the regular expressions right was tricky, so Topfew also has a --sample option that prints out what amounts to a debug stream showing which records it’s accepting and rejecting, and how the keys are being stream-edited.

Almost ready for prime time

This is now a useful tool, for me anyhow. It’s replaced the shell pipeline that I use to see what’s popular in the blog this week. The version on github right now is pretty well-tested and seems to work fine; if you spend much time doing what at AWS we used to call log-diving, you might want to grab it off GitHub.

In the near future I’m going to use GoReleaser so it’ll be easier to pick up from whatever your usual tool depot is. And until then, I reserve the right to change option names and so on.

On the other hand, Dirkjan may be motivated to expand his Rust version, which would probably be faster. But, as I’m about to argue, the speedup may not be meaningful in production.

Open questions

There are plenty.

  1. Why is Rust faster than Go?

  2. How do you measure performance, anyhow…

  3. … and how do you profile Go?

  4. Shouldn’t you use mmap?

  5. What does Gene Amdahl think about concurrency, and does Topfew agree?

  6. Didn’t you do all this work a dozen years ago?

Which I’ll take out of order.

How to measure performance?

I’m using a 3.2GB file containing 13.3 million lines of Apache logfile, half from 2007 and half from 2020. The 2020 content is interesting because it includes the logs from around my Amazon rage-quit post, which was fetched more than everything else put together for several weeks in a row; so the data is usefully non-uniform.

The thing that makes benchmarking difficult is that this kind of thing is obviously I/O-limited. And after you’ve run the benchmark a few times, the data’s migrated into memory via filesystem caching. My Mac has 32G of RAM so this happens pretty quick.

So what I did was just embrace this by doing a few setup runs before I started measuring anything, until the runtimes stabilized and presumably little to no disk I/O is involved. This means that my results will not replicate your experience when you point Topfew at your own huge logfile which it actually has to read off disk. But the technique does allow me to focus in on, and optimize, the actual compute.

How do you profile Go?

Go comes with a built-in profiler called “pprof”. You may have noticed that the previous sentence does not contain a link, because the current state of pprof documentation is miserable. The overwhelming googlejuice favorite is Profiling Go Programs from the Golang blog in 2011. It tells you lots of useful things, but the first thing you notice is that the pprof output you see in 2021 looks nothing like what that blog describes.

You have to instrument your code to write profile data, which is easy and seems to cause shockingly little runtime slowdown. Then you can get it to provide a call graph either as a PDF file or in-browser via its own built-in HTTP server. I actually prefer the PDF because the Web presentation has hair-trigger pan/zoom response to the point that I have trouble navigating to the part of the graph I want to look at.
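
For the record, the instrumentation amounts to a handful of lines; this is adapted from the standard runtime/pprof recipe rather than copied from Topfew.

package main

import (
  "flag"
  "log"
  "os"
  "runtime/pprof"
)

var cpuprofile = flag.String("cpuprofile", "", "write a CPU profile to this file")

func main() {
  flag.Parse()
  if *cpuprofile != "" {
    f, err := os.Create(*cpuprofile)
    if err != nil {
      log.Fatal(err)
    }
    if err := pprof.StartCPUProfile(f); err != nil {
      log.Fatal(err)
    }
    defer pprof.StopCPUProfile()
  }
  // ... the work being profiled goes here ...
}

After a run, go tool pprof -pdf yourbinary cpu.prof renders the call graph as a PDF, and go tool pprof -http=:8080 cpu.prof gives you the browser view.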

While I’m having trouble figuring out what some of the numbers mean, I think the output is saying something that’s useful; you can be the judge a little further in.

Why is Rust faster?

Let’s start by looking at the simplest possible case, scanning the whole log to figure out which URL was retrieved the most. The required argument is the same on both sides: -f 7. Here is output from typical runs of the current Topfew and Dirkjan’s Rust code.

  Go: 11.01s user 2.18s system  668% cpu 1.973 total
Rust: 10.85s user 1.42s system 1143% cpu 1.073 total

The two things that stick out are that Rust is getting better concurrency and using less system time. This Mac has eight two-thread cores, so neither implementation is maxing it out. Let’s use pprof to see what’s happening inside the Go code. BTW if someone wants to look at my pprof output and explain how I’m woefully misusing it, ping me and I’ll send it over.

The profiling run’s numbers: 11.97s user 2.76s system 634% cpu 2.324 total; like I said, profiling Go seems to be pretty cheap. Anyhow, that’s 14.73 seconds of compute between user and system. The PDF of the code graph is too huge to put inline, but here it is if you want a look. I’ll excerpt screenshots. First, here’s one from near the top:

Top of the Go profile output

So, over half the time is in ReadBytes (Go’s equivalent of ReadLine); if you follow that call-chain down, at the bottom is syscall, which consumes 55.36%. I’m not sure if these numbers are elapsed time or compute time and I’m having trouble finding help in the docs.

Moving down to the middle of the call graph:

Near the middle of the Go profile output

It’s complicated, but I think the message is that Go is putting quite a bit of work into memory management and garbage collection. Which isn’t surprising, since this task is a garbage fountain, reading millions of records and keeping hardly any of that data around.

The amount of actual garbage-collection time isn’t that big, but I also wonder how single-threaded it is, because as we’ll see below, that matters a lot.

Finally, down near the bottom of the graph:

Near the bottom of the Go profile output

The meaning of this is not obvious to me, but the file-reading threads use the Lock() and Unlock() calls from Go’s sync.Mutex to mediate access to the occurrence-counting thread. So what are those 2.02s and 1.32s numbers down at the bottom of a “cpu” graph? Is the implementation spending three and a half seconds implementing mutex?

You may notice that I haven’t mentioned application code. That code, for pulling out the seventh field and tracking the top-occurring keys, seems to contribute less than 2% of the total time reported.

My guesses

Clearly, I need to do more work on making better use of pprof. But based on my initial research, I am left with the suspicions that Rust buffers I/O better (less system time), enjoys the benefits of forcing memory management onto the user, and (maybe) has a more efficient wait/signal primitive. No smoking pistols here.

I’m reminded of an internal argument at AWS involving a bunch of Principal Engineers about which language to use for something, and a Really Smart Person who I respect a lot said “Eh, if you can afford GC latency use Go, and if you can’t, use Rust.”

Shouldn’t you use mmap?

Don’t think so. I tried it on a few different systems and mmap was not noticeably faster than just reading the file. Given the dictum that “disk is the new tape”, I bet modern filesystems are really super-optimized at sweeping sequentially through files, which is what Topfew by definition has to do.

What does Gene Amdahl think about concurrency, and does Topfew agree?

Amdahl’s Law says that for every computing task, some parts can be parallelized and some can’t. So the amount of speedup you can get by cranking the concurrency is limited by that. Suppose that 50% of your job has to be single-threaded: Then even with infinite concurrency, you can never even double the overall speed.
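
The arithmetic fits in a few lines; amdahl() below is just the textbook formula, where serial is the fraction of the job that can’t be parallelized and n is the number of workers.

package main

import "fmt"

// amdahl returns the best-case speedup when a fraction "serial" of the work
// is single-threaded and the remainder is spread across n workers.
func amdahl(serial float64, n int) float64 {
  return 1 / (serial + (1-serial)/float64(n))
}

func main() {
  // With half the job single-threaded, even a thousand workers
  // can't quite double the overall speed.
  fmt.Printf("%.3f\n", amdahl(0.5, 2))    // 1.333
  fmt.Printf("%.3f\n", amdahl(0.5, 16))   // 1.882
  fmt.Printf("%.3f\n", amdahl(0.5, 1000)) // 1.998
}

Which is exactly the 50%-serial example above: infinite workers can never quite get you to 2×.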

For Topfew, my measurements suggest that the single-threaded part — finding top occurrence counts — is fairly small compared to the task of reading and filtering the data. Here’s a graph of a simple Topfew run with a bunch of different concurrency fan-outs.

Graph of Topfew performance vs core count

Which says: Concurrency helps, but only up to a point. The graph stops at eight because that’s where the runtime stopped decreasing.

Let’s really dig into Amdahl’s Law. We need to increase the compute load. We’ll run that query that focuses on the popular bots. First of all I did it the old-fashioned way:

egrep 'googlebot|bingbot|Twitterbot' test/data/big | bin/tf -f 7
    90.82s user 1.16s system 99% cpu 1:32.48 total

Interestingly, that regexp turns out to be pretty hard work. There was 1:32.48 elapsed, and the egrep user CPU time was 1:31. So the Topfew time vanished in the static. Note that we only used 99% of one CPU. Now let’s parallelize, step by step.

Reported CPU usage vs. number of cores

Look at that! As I increase the number of file segments to be scanned in parallel, the reported CPU usage goes up linearly until you get to about eight (reported: 774% CPU), then starts to fall off gently until it maxes out at about 13½ effective CPUs. Two questions: Why does it start to fall off, and what does the total elapsed time for the job look like?

Elapsed time as a function of number of cores

Paging Dr. Amdahl, please! This is crystal-clear. You can tie up most of the CPUs the box you’re running on has, but eventually your runtime is hard-limited by the part of the problem that’s single-threaded. The reason this example works so well is that the grep-for-bots throws away about 98.5% of the lines in the file, so the top-occurrences counter is doing almost no meaningful work, compared to the heavy lifting by the regexp appliers.

That also explains why the effective-CPU-usage never gets up much past 13; the threads can regexp through the file segments in parallel, but eventually there’ll be more and more waiting for the single-threaded part of the system to catch up.

And exactly what is the single-threaded part of the system? Well, my own payload code that counts occurrences. But Go brings along a pretty considerable runtime that helps most notably with garbage collection but also with I/O and other stuff. Inevitably, some proportion of it is going to have to be single-threaded. I wonder how much of the single-threaded part is application code and how much is Go runtime?

I fantasize a runtime dashboard that has pie charts for each of the 16 CPUs showing how much of their time is going into regexp bashing, how much into occurrence counting, how much into Go runtime, and how much into operating-system support. One can dream.

Update: More evidence

Since writing this, I’ve added a significant optimization. In the (very common) case where there’s a single field being used for top-few counting, I don’t copy any bytes, I just use a sub-slice of the “record” slice. Also, Simon Fell figured out a way to do one less string creation for regexp filtering. Both of these are in the parallelizable part of the program, and neither made a damn bit of difference on elapsed times. At this point, the single-threaded code, be it in Topfew or in the Go runtime, seems to be the critical path.
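
The zero-copy idea is just that, for a single -f field, the key can be a sub-slice of the record rather than a fresh allocation. A hand-rolled sketch (the field-splitting details are mine, not Topfew’s):

package main

import (
  "bytes"
  "fmt"
)

// field returns the n'th space-separated field (1-based) as a sub-slice of
// record; no bytes are copied.
func field(record []byte, n int) []byte {
  start := 0
  for i := 1; i < n; i++ {
    start += bytes.IndexByte(record[start:], ' ') + 1
  }
  if end := bytes.IndexByte(record[start:], ' '); end >= 0 {
    return record[start : start+end]
  }
  return record[start:]
}

func main() {
  record := []byte(`66.249.66.1 - - [27/Mar/2021:14:33:02 -0700] "GET /ongoing/ HTTP/1.1" 200`)
  fmt.Printf("%s\n", field(record, 1)) // 66.249.66.1
}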

How many cores should you use?

It turns out that in Go there’s this API called runtime.NumCPU() that returns how many processors Go thinks it’s running on; it returns 16 on my Mac. So by default, Topfew divides the file into that many segments. Which, if you look at the bottom graph above, is suboptimal. It doesn’t worsen the elapsed time, but it does burn a bunch of extra CPU to no useful effect. Topfew has a -w (or --width) option to let you specify how many file segments to process concurrently; maybe you can do better?
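
For the curious, the default is about as simple as it sounds; here’s a sketch of the idea, where the -w flag name comes from the post but the segment arithmetic is purely illustrative (the real code also has to worry about segment boundaries landing mid-line).

package main

import (
  "flag"
  "fmt"
  "runtime"
)

func main() {
  width := flag.Int("w", runtime.NumCPU(), "number of file segments to read concurrently")
  flag.Parse()

  var fileSize int64 = 3_200_000_000 // stand-in for the real logfile's size
  segSize := fileSize / int64(*width)
  for i := 0; i < *width; i++ {
    start := int64(i) * segSize
    end := start + segSize
    if i == *width-1 {
      end = fileSize // the last segment absorbs the rounding remainder
    }
    fmt.Printf("segment %2d: bytes %d..%d\n", i, start, end)
  }
}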

I think the best answer is going to depend, not just on how many CPUs you have, but on what kind of CPUs they are, and (maybe more important) what kind of storage you’re reading, how many paths it has into memory, how well its controller interleaves requests, and so on. Not to mention RAM caching strategy and other things I’m not smart enough to know about.

Didn’t you do all this work a dozen years ago?

Well, kind of. Back in the day when I was working for Sun and we were trying to sell the T-series SPARC computers which weren’t that fast but had good memory controllers and loads of CPU threads, I did a whole bunch of research and blogging on concurrent data processing; see Concur.next and The Wide Finder Project. Just now I glanced back at those (wow, that was a lot of work!) and to some extent this article revisits that territory. Which is OK by me.

Next?

Well, there are a few obvious features you could add to Topfew, for example custom field separators. But at this point I’m more interested in concurrency and Amdahl’s law and so on. I’ve almost inevitably missed a few important things in this fragment and I’m fairly confident the community will correct me.

Looking forward to that.


xmlwf -k 24 Mar 2021, 8:00 pm

What happened was, I needed a small improvement to Expat, probably the most widely-used XML parsing engine on the planet, so I coded it up and sent off a PR and it’s now in release 2.3.0. There’s nothing terribly interesting about the problem or the solution, but it certainly made me think about coding and tooling and so on. (Warning: Of zero interest to anyone who isn’t a professional programmer.)

Back story

As I mentioned last month, I took a little programming job, partly as a favor to a friend: writing a parser to transmute a huge number of antique IBM GML files into XML. It wasn’t terribly hard, but there was quite a bit of input variation, so I couldn’t be confident unless I checked that every single output file was proper XML (“well-formed”, we XML geeks say).

Fortunately there’s an Expat-based command-line tool called xmlwf that can scan XML files for errors and produce useful human-readable complaints, and it operates at obscene speed. So what I wanted to do was run my parser over a few hundred GML files and then say, essentially, xmlwf * in the output directory.

Which didn’t work because, until very recently, xmlwf would just stop when it encountered the first non-well-formed file. So I added a -k option (“k” for “keep going”) so it could run over a thousand or so files and helpfully complain about the two that were broken.

Lessons from the PR

Most important, I hadn’t realized how great the programming environment is inside Amazon. It’s all git, but there’s no need for branches or PR’s. You make your changes, you commit, you use the tooling to launch a code review, you argue, you make more changes, you (probably) commit --amend (unless you think multiple commits are more instructive for some reason), and this repeats until everyone’s happy and you push into the CI/CD vortex.

Obviously other people might be working on the same stuff so you might have to do a git pull --rebase and there might be pain sorting out the results but that’s what they pay us for. (Right?)

Anyhow, you end up with a nice clean commit sequence in your codebase history and nobody ever has to think about branches or PR’s. (Obviously some larger tasks require branches but you’d be amazed how much you can live without them.)

Finding: Pull requests

Now that I’m out in the real world, it’s How Things Are Done. For good reasons. Doesn’t mean I have to like them. As evidence, I offer How to Rebase a Pull Request. Ewwww.

Finding: Coding tools

The last time I edited actual C code, nobody’d ever heard of JetBrains and “VS Code” would have sounded like a mainframe thing. I found the back corner of my brain where those memories lived, shook it vigorously, and Emacs fell out. The thing I’m now using to type the text you’re now reading. Oh, yeah; that was then.

C code in Emacs in 2021

It’s 2021. No, really.

It worked fine. I mean, no autocomplete, but there was syntax coloring and indentation and whole cubic centimeters (probably) of brain cells woke up and remembered C. Dear reader, back in the day I wrote hundreds and hundreds of thousands of lines of the stuff, and I guess it doesn’t go away. In fact, the number of syntax errors was pretty well zero because the fingers just did the right thing.

Finding: The Mac as open-source platform

It’s not that great. Expat maintainer Sebastian Pipping quite properly drop-kicked my PR because it had coding-standards violations and a memory leak, revealed by the Travis CI setup. I lazily tried to avoid learning Travis and, with Sebastian’s help, figured out the shell incantations to run the CI. But on the Mac they only sort of worked; in particular, Clang failed to spot the memory leak.

The best way to deal with this is probably to learn enough Docker (Docker Compose, probably) to make a fake Linux environment. I was well along the path to doing that when I realized I had a real Linux environment, namely tbray.org, the server sending you the HTML you are now reading.

(Except that it’s a Debian box which couldn’t run the clang-format coding-standards test; but that’s OK, my Mac could manage after I used Homebrew to install coreutils and moreutils and gnu-sed and various other handsome ecosystem fragments.)

I mean, I got it to go. But if I do it again, I’ll definitely wrestle Docker to the ground first. Which is irritating; this stuff should Just Work on a Mac. Without having a Homebrew dance party.

C

Well, yeah. We shouldn’t diss it too much, basically every useful online service you interact with is running on it. But after my -k option was added, clang found a memory leak in xmlwf. Which I tracked down and yeah, it was real, but it had also been there before my changes. And it wouldn’t be a problem in normal circumstances, until it suddenly was, and then you’d be unhappy. Which is why, in the fullness of time, most C should be replaced by Go (if you can tolerate garbage-collection latency) and Rust (if you can’t). Won’t happen in my lifetime.

Anyhow

Thanks to Sebastian, who was polite in the face of my repeated out-of-practice cluelessness. And hey, if you need to syntax-check huge numbers of XML files, your life just got a little easier.


Decarbonization 19 Jan 2020, 9:00 pm

We’re trying to decarbonize our family as much as we can. We’re not kidding ourselves that this will move any global-warming needles. But sharing the story might, a little bit. [Updated mid-2021 with a bit of progress: No more gas vehicles, heat pumps in production.]

Those who worry a lot about the climate emergency, and who wonder what they might do about it, are advised to have a look at To fix Climate Change, stop being a techie and start being a human by Paul Johnston (until recently a co-worker). It’s coldly realistic, noting that the Climate Emergency is not your fault and that you can’t fix it. Only radical large-scale global political action will, and his recommendation is that you find an organization promoting such change that meets your fancy (there are lots, Paul provides a helpful list) and join it.

Global GHG per capita, 2017

Such intensity of change is possible. It happened in the middle of the Twentieth century when faced with the threat of global Fascism: governments unceremoniously fixed wages, told businesses what they could and couldn’t do, and sent millions of young men off to die.

It’s not just possible, it’s inevitable that this level of effort will happen again, when the threat level becomes impossible to ignore. We will doubtless have to conscript en masse to fight floods and fires and plagues, which is nonetheless better than attacking positions defended by machine guns. The important thing is that it happen sooner rather than later.

Thus evangelism is probably the most important human activity just now, which is why people like Greta Thunberg are today the most important people in the world.

A modest proposal: Decarb

“Decarbonize” has four syllables and “decarbonization” six. I propose we replace both the noun and the verb with “decarb” which has only two. Mind you, it’s used in the cannabis community as shorthand for decarboxylation, but I bet the world’s stoners would be happy to donate their two syllables to the cause of saving the planet. Anyhow, I’m gonna start using decarb if only because of the typing it saves.

So, why personal decarb, then?

Well, it feels good. And — more important — it sends a message. At some point, if everybody knows somebody who’s decarbing it will help bring the earth’s defenders’ message home, making it real and immediate, not just another titillation in their Facebook feed.

There’s a corollary. Decarbing is, by and large, not terribly unpleasant. The things we need to give up are I think not strongly linked to happiness, and some decarb life choices turn out to be pretty pleasing.

Caveats and cautions

It’s important that I acknowledge that I’m a hyperoverentitled well-off healthy straight white male. Decarbing is gonna be easier for me than it is for most, because life is easier for me. I’m willing to throw money at decarb problems before that becomes economically sensible because I have the money to throw; but I hope (and, on the evidence, believe) that pretty well every one of these directions will become increasingly economically plausible across a wider spectrum of incomes and lifestyles.

Because of my privileged position and because the target is moving, I’m not going to talk much about the costs of the decarb steps. And I’m not even sure it’d be helpful; these things depend a lot on where in the world you live and when you start moving forward.

Now for a survey of decarb opportunities.

Decarb: Get gas out of your car

Obviously fossil fuels are a big part of the problem, and automobiles make it worse because they are so appallingly inefficient at turning the available joules into kilometers of travel — most less than 35%. On the other side of the coin, electric vehicles are a win because they don’t care what time of day or night you charge them up, so they’re a good match for renewable energy sources.

Family progress report: Good.

For good news check out my Jaguar diary; I smile a little bit every time I cruise past a gas station. I’m here to tell you that automotive decarb isn’t only righteous, it’s fun.

Jaguar I-Pace

Decarb: Get yourself out of cars

Last time I checked, the carbon load presented by a car is only half fuel, more or less; the rest is manufacturing. So we need to build fewer cars which means finding other ways to get places. Public transit and micromobility are the obvious alternatives.

They’re good alternatives too, if you can manage them. If you haven’t tried a modern e-bike yet you really owe that to yourself; it’s a life-changer. I think e-bikes are better alternatives than scooters along almost every axis: Safety, comfort, speed, weather-imperviousness.

And transit is fine too, but there are lots of places where it’s really not a credible option.

Now, there are times when you need to use a car. But not necessarily to own one. It isn’t rocket science: If we share cars then we’ll manufacture fewer. There are taxis and car-shares like Car2Go and friends. (Uber et al are just taxi companies with apps; and money-losing ones at that. Their big advantage is that your ride is partly paid for by venture-capital investors who are going to lose their money, so it’s not sustainable.) The car-share picture varies from place to place: Here in Vancouver we have Evo and Modo.

Family progress report: Pretty good.

Since I got my e-bike I’ve put three thousand km on it and I remain entirely convinced this is the future for a whole lot of people. I just don’t want to drive to work any more and resent it when family logistics force me to.

Trek E-Bike

My wife works from home and my son takes a bus or a skateboard to college. My daughter still gets driven to school occasionally in some combinations of lousy weather and an extra-heavy load, but uses her bike and the bus and will do so increasingly as the years go by.

I used to take the train quite a bit but the combination of Covid and leaving Amazon means it’s only rarely a good alternative to bicycling. Hmm. We use car-share a lot for going to concerts and so on, because it avoids the parking hassle, and when we need a van to schlep stuff around for a couple of hours.

Now, due to the hyperoverentitledness noted above, we have the good fortune to live in a central part of the city and all these commute options are a half-hour or less. This part of decarb is way harder in the suburbs and that’s where many just have to live for purely economic reasons.

Decarb: Get fossil fuels out of the house

Houses run on some combination of electricity and fossil fuels, mostly natural gas but some heating oil. The biggest win in this space is what Amory Lovins memorably christened negawatts — energy saved just by eliminating wastage. In practical terms, this means insulating your house so it’s easier to heat and/or cool, and switching out incandescent lights for modern LEDs.

Ubiquitous LEDs are a new thing but they’re not the only new thing. For heating and cooling, heat pumps are increasingly attractive. Their performance advantage over traditional furnaces is complicated. Wikipedia says the proper measure is coefficient of performance (COP), the ratio of useful heat movement per work input. Obviously a traditional furnace can’t do better than 1.0; a heat pump’s COP is 3 or 4 at a moderate temperature like 10°C and decreases to 1.0 at around 0°C. So this is a win in a place like Vancouver but maybe not if you’re in Minnesota or Saskatchewan.

Heat pump technology also works for your hot-water tank.

Another new-ish tech, popular in parts of Europe for some time now, is induction cooking. It’s more efficient than gas cooking, which is in turn more efficient than a traditional electric cooktop. For those of you who abandoned electric for gas years ago because it was more responsive, think again: Induction reacts just as fast as gas and boils water way faster.

Note that all these technologies are electric, so your decarb advantage depends on how clean your local power is. Here in the Pacific Northwest where it’s mostly hydroelectric, it’s a no-brainer. But even if you’ve got relatively dirty power the efficiency advantage of the latest tech might put you ahead on the carbon count.

Also bear in mind that your local electricity will likely be getting cleaner. If you track large-scale energy economics, the cost of renewables, even without subsidies, even with the need for storage, has fallen to the point where it not only makes sense to use it for new demand, but in some jurisdictions it makes sense to shut down high-carbon generating infrastructure in favor of newer, better alternatives.

Family progress report: Good, but…

We had a twenty-year-old gas range which was starting to scare us from time to time, for example when the broiler came on with a boom that rattled the windows. So early this year we retired it in favor of a GE Café range, model CCHS900P2MS1. It makes the old gas stove feel like cave-man technology. Quieter, just as fast, insanely easier to clean, and safer.

On the other hand, it requires cookware with significant ferrous content, which yours maybe doesn’t have; we had to replace a few of ours. And one thing that really hurts: It doesn’t work with a wok so we stir-fry a whole lot less.

GE Café CCHS900P2MS1 Induction Range

Modern induction range with a traditional cast-iron frying pan, making pancakes.

Now, as for those negawatts: We live in a wooden Arts-and-Crafts style house built in 1912 and have successively updated the insulation and doors and windows here and there over the years to the point where it’s a whole lot more efficient than it was. Late last year we went after the last poorly-insulated corner and found ourselves spending several thousand dollars on asbestos remediation.

Also in early 2020 we installed a Mitsubishi heat pump and a heat-pump-based hot-water tank. This turns out (in 2021 in Western Canada) to be quite a bit more expensive than heating the house with gas was. Also, modern furnaces (compared to the decades-old boxes they replaced) want to pump more air and pump it gently. There is one bedroom that we just couldn’t get properly ducted without a major house rebuild and it’s kind of cold now.

Another downside is that the tech isn’t fully debugged. The thermostat that came with the thing is really primitive and we haven’t been able to make a more modern one work with it.

Having said that, when the furnace is on, it’s a whole lot quieter and gentler than what it replaced.

One consequence is that we could turn off the natural gas coming into the house, which makes us feel good on the decarb front. And then there’s the earthquake issue: where we live, we’re overdue for The Big One and when it comes, if your house doesn’t fall down on you, another bad outcome is a natural-gas leak blowing you to hell.

Decarb: Fly less

Yeah, the carbon load from air travel is bad. The first few electric airplanes are starting to appear; but no-one believes in electric long-haul flights any time soon.

Family progress report: Bad; used to be worse.

For decades, as the public face of various technologies, I was an egregious sinner, always jetting off to this conference or that customer, always with the platinum frequent-flyer card, often upgraded to biz class.

Since I parted ways with Google in 2014 and retreated into back-room engineering then retirement, things have been better. But not perfect; we have family in New Zealand and Saskatchewan, and take to the air at least annually. I haven’t any notion what proportion of air-flight carbon has business upstream, what proportion vacations, and what proportion love. I hope the planet can afford the latter and a few vacations.

Of course, Covid slaughtered biz travel and we don’t know yet how much will come back. I can’t imagine that anyone thinks it’ll go 100% back to the way it was, thank goodness.

Decarb: Eat less meat

The numbers for agriculture’s share of global carbon load are all over the place; I see figures from 9% to twice that. The UN Food and Agriculture Organization says global livestock alone accounts for 14.5% of all anthropogenic GHG emissions. So if the human population backed away from eating dead animals, it’d move the needle.

Family progress report: Not that great but improving.

We eat more vegetarian food from year to year, especially now that my son cooks once or twice a week and generally declines to include meat in what he makes. I make progress but slower, not so much because meat is tasty and nutritionally useful but because it’s easy; less chopping and other prep required. In a culture that’s chronically starved for time, this is going to be a heavy lift.

Also we’ve cut way back on beef, which is the worst single part of the puzzle. I think it’s probably perfectly OK for a human to enjoy a steak or a burger; say, once per month.

Decarb: Random sins

You probably shouldn’t have a motorboat (which we do) and if you do you should use it sparingly (which we do).

You probably shouldn’t burn wood in a fireplace (which we do) and if you do you should use it sparingly (which we do).

[To be updated].
