

ongoing fragmented essay by Tim Bray

More Topfew Fun 1 Jul 2020, 9:00 pm

Back in May I wrote a little command-line utility called Topfew (GitHub). It was fun to write, and faster than the shell incantation it replaced. Dirkjan Ochtman dropped in a comment noting that he’d written Topfew along the same lines but in Rust (GitHub) and that it was 2.86 times faster; at GitHub, the README now says that with a few optimizations it’s now 6.7x faster. I found this puzzling and annoying, so I did some optimization too, encountering surprises along the way. You already know whether you’re the kind of person who wants to read the rest of this.

Reminder: What topfew does is replace the sort | uniq -c | sort -rn | head pipeline that you use to do things like find the most popular API call or API caller by plowing through a logfile.

Initial numbers

I ran these tests against a sample of the ongoing Apache access_log, around 2.2GB containing 8,699,657 lines. In each case, I looked at field number 7, which for most entries gives you the URL being fetched. Each number is the average of three runs following a warm-up run. (The variation between runs was very minor.)

              Elapsed     User   System   vs Rust
Rust             4.96     4.45     0.50      1.0
sort/uniq      142.80   149.66     1.29     28.80
topfew 1.0      27.20    28.31     0.54      5.49

Yep, the Rust code (Surprise #1) measures over five times faster than the Go. In this task, the computation is trivial and the whole thing should be pretty well 100% IO-limited. Granted, the Mac I’m running this on has 32G of RAM so after the first run the code is almost certainly coming straight out of RAM cache.

But still, the data access should dominate the compute. I’ve heard plenty of talk about how fast Rust code runs, so I’d be unsurprised if its user CPU was smaller. But elapsed time!?

What Dirkjan did

I glanced over his code. Gripe: Compared to other modern languages, I find Rust suffers from a significant readability gap, and for me, that’s a big deal. But I understand the appeal of memory-safety and performance.

Anyhow, Dirkjan, with some help from others, had added a few optimizations:

  1. Changed the regex from \s+ to [ \t].

  2. The algorithm has a simple hash-map of keys to occurrence counts. I’d originally done this as just string-to-number. He made it string-to-pointer-to-number, so after you find it in the hash table, you don’t have to update the hash-map, you just increment the pointed-to value.

  3. Broke the file up into segments and read them in separate threads, in an attempt to parallelize the I/O.

I’d already thought of #3 based on my intuition that this task is I/O-bound, but not #2, even though it’s obvious.
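For concreteness, here’s the shape of optimization #2 in Go. This is a minimal sketch of the idea, not Dirkjan’s Rust and not the actual topfew code:

package main

import "fmt"

func main() {
    // Store a pointer to the count. After a key's first insertion,
    // bumping it never rewrites the hash-map entry.
    counts := make(map[string]*uint64)
    for _, key := range []string{"/ongoing", "/ongoing", "/about"} {
        if p, ok := counts[key]; ok {
            *p++ // common case: increment through the pointer
        } else {
            n := uint64(1)
            counts[key] = &n // one map write per distinct key, ever
        }
    }
    fmt.Println(*counts["/ongoing"]) // 2
}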

What I did

In his comment, Dirkjan pointed out correctly that I’d built my binary “naively with ‘go build’”. So I went looking for how to make builds that are super-optimized for production and (Surprise #2) there aren’t any. (There’s a way to turn optimizations off to get more deterministic behavior.) Now that I think about it I guess this is good; make all the optimizations safe and just do them, don’t make the developer ask for them.

(Alternatively, maybe there are techniques for optimizing the build, but some combination of poor documentation or me not being smart enough led to me not finding them.)

Next, I tried switching in the simpler regex, as in Dirkjan’s optimization #1.

              Elapsed     User   System   vs Rust
Rust             4.96     4.45     0.50      1.0
sort/uniq      142.80   149.66     1.29     28.80
topfew 1.0      27.20    28.31     0.54      5.49
“[ \t]”         25.03    26.18     0.53      5.05

That’s a little better. Hmm, now I’m feeling regex anxiety.

Breakthrough

At this point I fat-fingered one of the runs to select on the first rather than the seventh field and that really made topfew run a lot faster. Which strengthened my growing suspicion that I was spending a lot of my time selecting the field out of the record.

At this point I googled “golang regexp slow” and got lots of hits; there is indeed a widely expressed opinion that Go’s regexp implementation is far from the fastest. Your attitude to this probably depends on your attitude toward using regular expressions at all, particularly in performance-critical code.

So I tossed out the regexp and hand-wrangled a primitive brute-force field finder, which by the way runs on byte-array slices rather than strings, thus skipping at least one conversion step. The code ain’t pretty, but it passed the unit tests.
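Here’s a stripped-down sketch of that kind of finder; my illustration, the real code being in the topfew repo:

package main

import "fmt"

// nthField returns the 1-based nth space-or-tab-separated field of
// line, or nil if there aren't that many fields. No regexps, no
// string conversions, just one scan over the bytes.
func nthField(line []byte, n int) []byte {
    field, i := 0, 0
    for i < len(line) {
        for i < len(line) && (line[i] == ' ' || line[i] == '\t') {
            i++ // skip separators
        }
        if i == len(line) {
            return nil
        }
        start := i
        for i < len(line) && line[i] != ' ' && line[i] != '\t' {
            i++ // consume the field
        }
        field++
        if field == n {
            return line[start:i]
        }
    }
    return nil
}

func main() {
    log := []byte(`8.8.8.8 - - [01/Jul/2020:09:00:00 -0700] "GET /ongoing HTTP/1.1" 200 3022`)
    fmt.Printf("%s\n", nthField(log, 7)) // /ongoing
}

And look what happened!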

              Elapsed     User   System   vs Rust
Rust             4.96     4.45     0.50      1.0
sort/uniq      142.80   149.66     1.29     28.80
topfew 1.0      27.20    28.31     0.54      5.49
“[ \t]”         25.03    26.18     0.53      5.05
no regexp        4.23     4.83     0.66      0.85

There are a few things to note here. Most obviously (Surprise #3), the Go implementation now has less elapsed time. So yeah, the regexp performance was sufficiently bad to make this process CPU-limited.

Less obviously, in the Rust implementation the user CPU time is less than the elapsed; user+system CPU are almost identical to elapsed, all of which suggests it’s really I/O limited. Whereas in all the different Go permutations, the user CPU exceeds the elapsed. So there’s some concurrency happening in there somewhere even though my code was all single-threaded.

I’m wondering if Go’s sequential file-reading code is doing multi-threaded buffer-shuffling voodoo. It seems highly likely, since that could explain the Go implementation’s smaller elapsed time on what seems like an I/O-limited problem.

[Update: Dirkjan, after reading this, also introduced regex avoidance, but reports that it didn’t appear to speed up the program. Interesting.]

Non-breakthroughs

At this point I was pretty happy, but not ready to give up, so I implemented Dirkjan’s optimization #2 - storing pointers to counts to allow increments without hash-table updates.

              Elapsed     User   System   vs Rust
Rust             4.96     4.45     0.50      1.0
sort/uniq      142.80   149.66     1.29     28.80
topfew 1.0      27.20    28.31     0.54      5.49
“[ \t]”         25.03    26.18     0.53      5.05
no regexp        4.23     4.83     0.66      0.85
hash pointers    4.16     5.10     0.65      0.84

Um, well, that wasn’t (Surprise #4) what I expected. Avoiding almost nine million hash-table updates had almost no observable effect on latency, while slightly increasing user CPU. At one level, since there’s evidence the code is limited by I/O throughput, the lack of significant change in elapsed time shouldn’t be too surprising. The increase in user CPU is, though; possibly it’s just a measurement anomaly.

Well, I thought, if I’m I/O-limited in the filesystem, let’s be a little more modern and instead of reading the file, let’s just pull it into virtual memory with mmap(2). So it should be the VM paging code getting the data, and everyone knows that’s faster, right?
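The code change is small. Here’s a sketch using Go’s raw syscall package (my illustration; the program itself used a third-party mmap library, more on that below):

package main

import (
    "fmt"
    "os"
    "syscall"
)

func main() {
    f, err := os.Open("access_log")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    st, err := f.Stat()
    if err != nil {
        panic(err)
    }
    // Map the whole file read-only; the kernel pages it in on demand.
    data, err := syscall.Mmap(int(f.Fd()), 0, int(st.Size()),
        syscall.PROT_READ, syscall.MAP_SHARED)
    if err != nil {
        panic(err)
    }
    defer syscall.Munmap(data)
    // Walk the mapping like any byte slice; no read(2) calls involved.
    lines := 0
    for _, b := range data {
        if b == '\n' {
            lines++
        }
    }
    fmt.Println(lines, "lines")
}

Here’s what happened: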

              Elapsed     User   System   vs Rust
Rust             4.96     4.45     0.50      1.0
sort/uniq      142.80   149.66     1.29     28.80
topfew 1.0      27.20    28.31     0.54      5.49
“[ \t]”         25.03    26.18     0.53      5.05
no regexp        4.23     4.83     0.66      0.85
hash pointers    4.16     5.10     0.65      0.84
mmap             7.17     8.33     0.62      1.45

So by my math, that’s (Surprise #5) 72% slower. I am actually surprised, because if the paging code just calls through to the filesystem, it ought to be a pretty thin layer and not slow things down that much. I have three hypotheses: First, Go’s runtime maybe does some sort of super-intelligent buffer wrangling to make the important special case of sequential filesystem I/O run fast. Second, the Go mmap library I picked more or less at random is not that great. Third, the underlying macOS mmap(2) implementation might be the choke point. More than one of these could be true.

A future research task is to spin up a muscular EC2 instance and run a few of these comparisons there to see how a more typical server-side Linux box would fare.

Parallel?

Finally, I thought about Dirkjan’s parallel I/O implementation. But I decided not to go there, for two reasons. First of all, I didn’t think it would actually be real-world useful, because it requires having a file to seek around in, and most times I’m running this kind of a pipe I’ve got a grep or something in front of it to pull out the lines I’m interested in. Second, the Rust program that does this is already acting I/O limited, while the Go program that doesn’t is getting there faster. So it seems unlikely there’s much upside.

But hey, Go makes concurrency easy, so I refactored slightly. First, I wrapped a mutex around the hash-table access. Then I put the code that extracts the key and calls the counter in a goroutine, throwing a high-fanout, high-TPS concurrency problem at the Go runtime without much prior thought.
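In shape, something like this; a sketch of the brute-force version, not the exact code:

package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
)

func main() {
    counts := make(map[string]uint64)
    var mu sync.Mutex
    var wg sync.WaitGroup
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        line := scanner.Text()
        wg.Add(1)
        go func(l string) { // high fanout: one goroutine per record
            defer wg.Done()
            key := l // in topfew this would be the extracted field
            mu.Lock()
            counts[key]++ // every increment contends on one lock
            mu.Unlock()
        }(line)
    }
    wg.Wait()
    fmt.Println(len(counts), "distinct keys")
}

Here’s what happened.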

              Elapsed     User   System   vs Rust
Rust             4.96     4.45     0.50      1.0
sort/uniq      142.80   149.66     1.29     28.80
topfew 1.0      27.20    28.31     0.54      5.49
“[ \t]”         25.03    26.18     0.53      5.05
no regexp        4.23     4.83     0.66      0.85
hash pointers    4.16     5.10     0.65      0.84
goroutines!      6.53    57.84     7.68      1.32

(Note: I turned the mmap off before I turned the goroutines on.)

I’m going to say Surprise #6 at this point even though this was a dumbass brute-force move that I probably wouldn’t consider doing in a normal state of mind. But wow, look at that user time; my Mac claims to be an 8-core machine but I was getting more than 8 times as many CPU-seconds as clock seconds. Uh huh. Pity those poor little mutexes.

I guess a saner approach would be to call runtime.NumCPU() and limit the concurrency to that value, probably by using channels. But once again, given that this smells I/O-limited, the pain and overhead of introducing parallel processing seem unlikely to be cost-effective.

Or, maybe I’m just too lazy.
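(For the record, the saner shape would be something like the following; a sketch only, untested against the real topfew internals.)

package main

import (
    "bufio"
    "fmt"
    "os"
    "runtime"
    "sync"
)

func main() {
    nWorkers := runtime.NumCPU()
    lines := make(chan string, 1024)
    results := make(chan map[string]uint64, nWorkers)
    var wg sync.WaitGroup
    for i := 0; i < nWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            local := make(map[string]uint64) // private map, no locks
            for l := range lines {
                local[l]++
            }
            results <- local
        }()
    }
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        lines <- scanner.Text()
    }
    close(lines)
    wg.Wait()
    close(results)
    merged := make(map[string]uint64)
    for local := range results {
        for k, v := range local {
            merged[k] += v
        }
    }
    fmt.Println(len(merged), "distinct keys")
}

Each worker keeps a private map, so the hot path needs no locks, and the final merge touches each distinct key only once per worker.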

Lessons

First of all, Rust is not magically and universally faster than Go. Second, using regular expressions can lead to nasty surprises (but we already knew that, didn’t we? DIDN’T WE?!) Third, efficient file handling matters.

Thanks to Dirkjan! I’ll be interested to see if someone else finds a way to push this further forward. And I’ll report back any Linux testing results.


Oboe to Ida 26 Jun 2020, 9:00 pm

What happened was, I was listening to an (at least) fifty-year-old LP, a classical collection entitled The Virtuoso Oboe; it’s what the title says. The music was nothing to write home about but led into a maze of twisty little passages that ended with me admiring the life of a glamorous Russian bisexual Jewish ballerina heiress who’s famous for… well, we’ll get to that.

The uncollected Les Brown

Regular readers know that I have a background task — working through an inherited trove of 900 mostly-classical LPs. In that piece I said I was going to blog my way through the collection but I haven’t, which is probably OK. But tonight’s an exception.

Oboe virtuosity

As the picture linked above reveals, the records are in cardboard liquor boxes, and somewhat clustered thematically. The box I’m currently traversing seems to be mostly “weird old shit, mostly bad, mostly scratchy”. It contained some malodorous “easy listening” products of the Fifties and Sixties. (I kept one, 1944-46 recordings by Les Brown’s extremely white jazz band, forgettable except it has Doris Day on vocals and let me tell ya, she sets the place on fire. But I digress.)

The Virtuoso Oboe, Vol. 1

Anyhow, last night’s handful of LPs to wash and dry and try included The Virtuoso Oboe and The Virtuoso Oboe Vol. 4, whose existence suggests the series had legs. It features that famous household name André Lardrot on, well, oboe.

The first record contained music from Cimarosa, Albinoni, Handel, and Haydn and frankly didn’t do anything for me. None of the tunes were that memorable and I didn’t find the oboe-playing compelling. So it’s going off to the Sally Ann.

I questioned whether Vol. 4 was even worth trying, but then I noticed it had a concerto by Antonio Salieri. Who’s got Salieri in their collection, I ask you? Nobody, that’s who! But now I do.

The Virtuoso Oboe, Vol. 4

The concerto is nice; if you told me it was by Mozart I’d believe you. A couple of the melodies are very strong and skip opportunities for convolutions and decorations that W. Amadeus never would have.

On the other side was a Concertino for English Horn by Donizetti. I vaguely remembered that the English Horn (a.k.a. cor anglais) is sort of like an oboe only bigger, and notably neither English nor a horn.

Oboe d’Amore

While I was reading up on that I discovered the Oboe d’amore, which is bigger than an oboe but smaller than a cor anglais. Who could resist learning more about a love oboe? Whereupon I learned that it’s perhaps most famous for leading one of the stanzas in Ravel’s Boléro. At which point I was about to stop, because what more can be said about the Boléro?

Well, lots! I’d somehow internalized it as fact that Ravel hated his most famous work and regretted writing it, but not so. He attended performances and produced a later arrangement for two pianos. He also expressed the perfectly reasonable opinion that once he’d done it, nobody else needed to write any pieces consisting of a single melodic line repeated over and over again for seventeen minutes and successively enhanced by adding more and more instruments playing louder and louder.


He also thought it should be played at a steady pace and got into a big public pissing match with Toscanini, who wanted to play the piece not only louder and louder but faster and faster.

I was thinking about the Oboe d’Amore and Boléro and speculated that there might be a joke to be found along the lines of oBo d’Erek, but you’d have to be over fifty to follow the thread, and anyhow I was wrong, there isn’t a joke there.

Ida Rubinstein

Wait, there’s more! It turns out that Ravel didn’t write the Boléro just because he felt like it, but because he was paid for it; a commission from one Ida Rubinstein, mentioned at the top: dancer, Russian, heiress, bisexual, etc. The reason I wrote this was to get you to hop over to Wikipedia and read up about her life, which was extraordinary. Having done that, you can actually watch bad silent film of her dancing, someone having supplied a music track. It’s a pity there’s no video of her performing to Boléro.

My work here is done.


Break Up Google 25 Jun 2020, 9:00 pm

It’s easy to say “Break up Big Tech companies!” Depending how politics unfold, the thing might become possible, but figuring out the details will be hard. I spent the last sixteen years of my life working for Big Tech and have educated opinions on the subject. Today: Why and how we should break up Google.


Where’s the money?

Google’s published financials are annoyingly opaque. They break out a few segments but (unlike Amazon) only by revenue; there’s nothing about profitability. Still, the distribution is interesting. I collated the percentages back to Q1 2018:

          Ads on Google   Ads off Google   Ads on YouTube   Cloud    Other
2018 Q1       70.97%          14.84%             ?             ?     14.19%
2018 Q2       71.69%          14.77%             ?             ?     13.54%
2018 Q3       71.73%          14.58%             ?             ?     13.69%
2018 Q4       69.05%          14.32%             ?             ?     16.62%
2019 Q1       70.99%          13.81%             ?             ?     14.92%
2019 Q2       70.36%          13.66%             ?             ?     15.98%
2019 Q3       70.97%          13.15%             ?             ?     15.88%
2019 Q4       59.39%          13.10%          10.26%         5.68%   11.57%
2020 Q1       59.76%          12.68%           9.76%         6.83%   10.73%

Note that they started breaking out YouTube and Cloud last year; it looks like Cloud was previously hidden in “Other” and YouTube in “Ads on Google”. I wonder what else is hidden in there?

Why break it up?

There are specific problems; I list a few below. But here’s the big one: For many years, the astonishing torrent of money thrown off by Google’s Web-search monopoly has fueled invasions of multiple other segments, enabling Google to bat aside rivals who might have brought better experiences to billions of lives.

  1. Google Apps and Google Maps are both huge presences in the tech economy. Are they paying for themselves, or are they using search advertising revenue as rocket fuel? Nobody outside Google knows.

    In particular, I’m curious about Gmail, which had 1.5B users in 2018. Some of those people see ads, but plenty don’t. So what’s going on there? It can’t be that cheap to run. Where’s the money?!

  2. The maps business, with its reviews and ads, has a built-in monopolization risk that I wrote about in 2017. It needs to be peeled off so we can think about it. We definitely want reviews and ads on maps (“Where’s the nearest walk-in clinic open on Sunday with friendly staff?”), but the potential for destructive corruption is crazy high.

    Used to be, there were multiple competing map producers and some of them were governments. The notion of mapping being a public utility (Perhaps multinational? What a concept!) with competing ad vendors running on it doesn’t sound crazy to me.

  3. The online advertising business has become a Facebook/Google duopoly which is destroying ad-supported publishing and thus intellectually impoverishing the whole population; not to mention putting a lot of fine journalists out of work. The best explanation I’ve ever read of how that works is Data Lords: The Real Story of Big Data, Facebook and the Future of News by Josh Marshall.

  4. The world needs Google Cloud to be viable because it needs more than two public-cloud providers. It’s empirically possible for such a business to make lots of money; AWS does. GCloud needs to be ripped away from its corporate parent, just as AWS does, but for different reasons.

  5. I note the pointed absence of Android in any of the financials. It’s deeply weird that the world’s most popular operating system has costs and revenues that nobody actually knows. Of course, the real reason Android exists is that Google needs mobile advertising not to become an Apple monopoly.

  6. YouTube has become the visual voice of several generations and is too important to leave hidden inside an opaque conglomerate. Is it a money-spinner or strictly a traffic play? Nobody knows. What we do know is that people who try to make a living as YouTubers sure do complain a lot about arbitrary, ham-handed changes of monetization and content policy. Simultaneously, Google’s efforts to avoid promoting creeps and malefactors aren’t working very well.

What to do?

First, spin Advertising off into its own company and then enact aggressive privacy-protection and anti-tracking law. Start by doing away with 100% of third-party cookies, no exceptions. It’d probably be OK to leave the off-site advertising in that company.

But YouTube definitely has to be its own thing; it’s got no real synergy that I can detect with any other Google property.

I’m not even sure Android is a business. Its direct cost is a few buildings full of engineers. Its revenue is (indirectly) mobile ads, plus Play Store commissions, plus Pixel sales plus, uh, well nobody knows what kinds of backroom arrangements are in place with Samsung et al. Absent the mobile ads, I doubt it’s much of a money-maker. Maybe turn it into a foundation funded by a small levy on everyone who ships an Android phone… Other ideas?

What to do with Maps isn’t obvious to me. It’s probably a big-money business (but we don’t know). In combination with the reviews capability it should be a great advertising platform, but the opportunities for corruption are so huge I’m not sure any private business could be trusted to manage them. First step: Force Google to disclose the financials.

I think Google Cloud could probably make a go of it as an indie, if only as a vendor of infrastructure to all the other ex-Google properties. And I think the product is good, although they’re running third in the public-cloud race.

To increase Google Cloud’s chances, throw in the Apps business; Microsoft classifies their equivalent as Cloud and I don’t think that’s crazy. My Microsoft-leaning friends scoff at the G Apps, but they’re just wrong; with competent administration the apps offer a slick, fast, high-productivity office-automation experience.

Finally, we could hope that, as a standalone company, they’d break Google’s habit of suddenly killing products heavily depended-on by customers. You just can’t do that in the Enterprise space.

The politics

So there should be at least four Google-successor organizations, each with a chance for major success.

I think this would be pretty easy to sell to the public. To start with, what’s left of the world’s press would cheerlead, eager to get out from under the thumb of the Google/Facebook ad cartel. Legions of YouTubers would march in support as well.

Financially, I think Google’s whole is worth less than the sum of its parts. So a breakup might be a win for shareholders. This is a reasonable assumption if only because the fountain of money thrown off by Web-search advertising leaves a lot of room for laziness and mistakes in other sectors of the business.

Also, it’s quite likely the ex-Googles could come out ahead on the leadership front. Larry, Sergey, and the first wave of people they hired made brilliant moves in building Web search and then the advertising business. But the leadership seems to have lost some of that golden touch; fresh blood might really help.

When?

The best time would have been sometime around 2015. The second best…


A-Cloud PR/FAQ 21 Jun 2020, 9:00 pm

I’d like to see AWS split off from the rest of Amazon and I’m pretty sure I’m not alone. So to help that happen, I’ve drafted a PR/FAQ and posted it on GitHub so that it can be improved. People who know what a PR/FAQ is and why this might be helpful can hop on over and critique the doc. For the rest, herewith background and explanation on the what, why, and how.

A project needs a name and so does a company. For the purposes of this draft I’m using “A-Cloud” for both.

Why spin off AWS?

The beating of the antitrust drums is getting pretty loud across a widening swathe of political and economic conversations. The antitrust guns are particularly aimed at Big Tech, who I’m not convinced are the most egregious monopolists out there (consider beer, high-speed Internet, and eyeglasses), but who are maybe the richest and touch more people’s lives more directly.

So Amazon might prefer to spin off AWS proactively, as opposed to under hostile pressure from Washington.

But that’s not the only reason to do it. The cover story in last week’s Economist, Can Amazon keep growing like a youthful startup? spilled plenty of ink on the prospects for an AWS future outside of Amazon. I’ll excerpt one paragraph:

AWS has the resources to defend its market-leading position. But in the cloud wars any handicap could cost it dearly. Its parent may be becoming one such drag. For years being part of Amazon was a huge advantage for AWS, says Heath Terry of Goldman Sachs, a bank. It needed cash from the rest of the group, as well as technology and data. But Mr Bezos’s habit of moving into new industries means that there are now ever more rivals leery of giving their data to it. Potential customers worry that buying services from AWS is tantamount to paying a land-grabber to invade your ranch. Walmart has told its tech suppliers to steer clear of AWS. Boards of firms in industries which Amazon may eye next have directed their IT departments “to avoid the use of AWS where possible”, according to Gartner.

The Economist plausibly suggests a valuation of $500B for AWS. Anyhow, the spin-off feels like a complete no-brainer to me.

Now, everyone knows that at Amazon, when you want to drive a serious decision, someone needs to write, polish, and bring forward a six-pager, usually a PR/FAQ. So I started that process.

What’s a PR/FAQ?

It’s a document, six pages plus appendices, that is the most common tool used at Amazon to support making important decisions. There’s no need for me to explain why this works so well; Brad Porter did a stellar job back in 2015. More recently, Robert Munro went into a little more detail.

On GitHub?

One thing neither of those write-ups really emphasize is that six-pagers in general, and PR/FAQs in particular, are collaborative documents. The ones that matter have input from many people and, by the time you get to the Big Read in the Big Room with the Big Boss, have had a whole lot of revisions. That’s why I put this one on GitHub.

I feel reasonably competent at this, because there are now several successful AWS services in production where I was an initial author or significant contributor to the PR/FAQ. But I know two things: First, there are lots of people out there who are more accomplished than me. A lot of them work at Amazon and have “Product Manager” in their title. Second, these docs get better with input from multiple smart people.

So if you feel qualified, think AWS should be spun out, and would like to improve the document, please fire away. The most obvious way would be with a pull request or new issue, but that requires a GitHub account, which probably has your name on it. If you don’t feel comfortable contributing in public to this project, you could make a burner GitHub account. Or if you really don’t want to, email me diffs or suggestions. But seriously, anyone who’s qualified to improve this should be able to wrangle a pull request or file an issue.

What’s needed?

Since only one human has read this so far, there are guaranteed to be infelicities and generally dumb shit. In particular, the FAQ is not nearly big enough; there need to be more really hard questions with really good answers.

Also, appendices are needed. The most important one would be “Appendix C”, mentioned in the draft but not yet drafted. It covers the projected financial effects of spinning out A-Cloud. I’d love it if someone financially savvy would wrap something around plausible numbers.

Other likely appendices would be a description of the A-Cloud financial transaction, and (especially) a go-to-market plan for the new corporation: How is this presented to the world in a way that makes sense and is compelling?

Document read?

Before these things end up in front of Andy Jassy, there are typically a few preparatory document reads where intermediate-level leaders and related parties get a chance to spend 20-30 minutes reading the document then chime in with comments.

Given enough requests, I’d be happy to organize such a thing and host it on (of course) Amazon Chime, which really isn’t bad. Say so if you think so.


AWS’s Share of Amazon’s Profit 14 Jun 2020, 9:00 pm

I’ve often heard it said that AWS is a major contributor to Amazon’s profitability. How major? Here’s a graph covering the last nine quarters, with a brief methodology discussion:

AWS Proportion of Amazon Profit, Quarterly since 2018 Q1

I think this is a useful number to observe as time goes by, and it would be good if we generally agree on how to compute it. So here’s how I did it.

Start at the Quarterly Results page at ir.amazon.com. It only has data back through 2018 which seems like enough history for this purpose. Let’s take a walk through, for example, the Q1 2020 numbers.

Please note: I have zero inside knowledge about the facts behind these numbers. Every word here is based on publicly available data.

It’s easy to identify the AWS number; search in the quarterly for “Segment Information” and there it is: Net sales $10,219M, Operating expenses $7,144M, Operating Income $3,075M. So if we want to know the proportion of Amazon profits that are due to AWS, that’d be the numerator.

So what’s the denominator? Since the AWS results exclude tax, the Amazon number should too. Search for “Consolidated Statements of Operations” and there are two candidates: “Operating Income” and “Income before income taxes”. The first has the same name as the AWS line, so it’s a plausible candidate. But it excludes net interest charges and $239M of unspecified “Other income”.

This is a little weird, since interest is a regular cost of doing business. It might be the case that the interest expense is mostly due to financing computers and data centers for AWS, or alternatively that it’s mostly financing new fulfillment centers and trucks and planes and bulk buys of toilet paper. And then there’s the “Other income”.

If we assume that the “Other income” and net interest gain/loss is distributed evenly across AWS and the rest of Amazon, then you should use the “Operating” line. I’m not 100% comfy with that assertion, so I made a graph with both lines, and I think it’s likely the best estimate of AWS’s proportion of Amazon’s income falls in the space between them.

So, if this graph is right, then over the last couple of years, somewhere between 50% and 80% of Amazon’s profit has been due to AWS. Glancing at the graph, you might think the trend is up, but that verges on extrapolation; I’d want to wait a couple more quarters before having an opinion.

Since I’m not a finance professional, I may have gone off the rails in some stupidly obvious way. So here’s the .xlsx I used to generate the graph. I used Google Sheets, so I hope it doesn’t break your Excel.

Please let me know if you find any problems.


Anti-Monopoly Thinking 8 Jun 2020, 9:00 pm

I recently read The Myth of Capitalism: Monopolies and the Death of Competition by Jonathan Tepper and Denise Hearn. For a decade or two now I’ve felt the economy is over-concentrated and in recent years I’ve been super-concerned about concentration of market power in the Big-Tech sector where I’ve earned my living. Reading this book has reinforced that concern and been very helpful in introducing new angles on how to think about monopoly. The issue is central to the travails and triumphs of 21st-century Capitalism and if you care about that, you might want to read it too.

The Myth of Capitalism

Where I stand

I believe that de-monopolization is a necessary but not sufficient condition for getting us out of our current socioeconomic turmoil. In particular, I’d go after Big Tech. The functions that are provided by Amazon, Apple, Facebook, Google, and Microsoft should be provided by at least twenty different companies.

I hope to dig into a few of them, with specifics, in a series of blog posts, given time and energy.

A few juicy outtakes

I don’t necessarily agree 100% with all of these, but they are thought-provoking.

“X companies control Y% of the US market in Z.”

  • X=2, Y=90, Z=beer

  • X=4, Y=almost all, Z=airlines

  • X=5, Y=50, Z=banks

  • X=2, Y=90, Z=health insurers in many states

  • X=1, Y=75, Z=fast Internet, most places in the US

  • X=3, Y=70, Z=pesticides

  • X=3, Y=80, Z=seed corn.

“[Warren Buffet,] when asked at an annual meeting what his ideal business was, he argued it was one that had ‘High pricing power, a monopoly.’”

“Given the lack of any new entrants into most industries, it should be no surprise that companies are getting larger and older. The average age of public companies in the United States is currently 18 years old, up from 12 years old in 1996. In real terms, the average company in the economy has become three times larger during the past two decades. Not only do we have fewer, older companies, but they are also capturing almost all the profits. In 1995 the top 100 companies accounted for 53% of all income from publicly traded firms, but by 2015, they captured a whopping 84% of all profits. Like Oliver Twist asking for more, there is little left for smaller companies after the big ones eat their fill.”

“The tech giants love startups, but in the same way that lions love feasting on lifeless carcasses of gazelles.”

“Since Reagan, no president has enforced the spirit or the letter of the Sherman and Clayton Acts. It doesn't matter what party has controlled Congress or the presidency, there has been no difference in policy toward industrial concentration. In fact, the budgets for antitrust enforcement have steadily shrunk with each passing president.”

“High markups matter a great deal in the inequality debate because they are tightly correlated with lower wages for workers.”

“Monopolies – not big businesses – are the enemy of competition. Big is neither beautiful nor ugly.”

“The easiest rule of thumb is that industries of fewer than six players should not be allowed to merge.”

Useful: Numbers and stories

In the first part of the book Tepper and Hearn establish that the economy is over-concentrated, demonstrating it with lots of facts and figures. There is a wealth of anecdotal case studies illustrating the practical ill-effects of this concentration.

Useful: Big-tech drill-down

They dive especially deep on concentration in Big Tech: Amazon’s retail, Google’s search, Facebook’s demographic sorting. Little of this was news to me, but for those who haven’t spent decades in the tech trenches, or who harbor any remaining illusions that this business is on the side of the little guy, the stories are valuable.

Useful: The German narrative

After WWII, it turns out that the Allies felt the massive pre-war concentration of the German economy had been a significant factor in the rise of the Nazis. So in the reconstruction of Germany, they made a deliberate attempt to keep that sort of monopolization from happening again. I don’t know if they were right about the rise of the Nazis, but history seems to have shown them right about designing economies; this economic structure has served Germany very well.

Useful: Post-consumerism

In recent times, the US has as a matter of policy gated all antitrust litigation on a single factor: Are consumer prices raised unduly? Tepper and Hearn argue convincingly that this is wrong on many levels. One of them is ethical: Humans are more than just consumers, and undue monopolization damages the human experience of life along many non-consumer axes.

Boring: Patents and trademarks

Yes, the modern IP regime is damaging and dysfunctional, but we already knew that, and this is not really essential to understanding monopolization. This chapter should just be dropped.

Useful: Regulation and lobbying

Most progressives, including me, are big fans of public regulation of businesses. Unfortunately, there is a strong case to be made that monopolies use regulatory arbitrage to crush potential competition and retain their steely grip. This is facilitated by massive lobbying efforts, the de-facto corruption that infests more or less all the capitals of the developed world.

Not only can you lobby for a regulatory structure that favors your company’s position, at some level a high volume of regulation becomes a qualitative advantage for big companies against small ones, because they can afford to hire the lawyers and compliance people to play the game, while startups can’t.

Useful: Cross-shareholding

When a small group of investors are investors in all the leading players in any given industry, this increases everyone’s incentive to carve the space up in a nice little orderly monopoly without any of that nasty profit-destroying competition. Evidence is not scarce.

Wrong: On Piketty

Tepper and Hearn go on a side-trip to explain why Thomas Piketty was wrong, but bizarrely begin by asserting “We have yet to meet anyone who has read the entire book. We're not making that up. Professor Jordan Ellenberg, a mathematics professor, did a study of bookmarks on Kindle e-books and found that almost no one made it past 26 pages in Piketty's book.” Um, wrong, I did. And, with the exception of the final section where Piketty introduces his proposed solutions to the problem, it’s not even heavy going.

I assume Tepper and Hearn’s “anyone” includes Tepper and Hearn, so I think it’s fair to pretty well blow off their discursion on growth and inequality, since to begin with their assertion about what Piketty claims is nowhere near what I encountered when I read it. Anyhow, no biggie, it’s just one little section. And then they go on to agree with Piketty’s finding that inequality has empirically been increasing continuously and strongly, and provide a bunch of useful data graphs.

Useful: Recommendations

Tepper and Hearn finish up with a series of how-do-we-fix-this recommendations. I’m not going to copy that many of them in here because if you care, you should read their damn book already.

But they make a crucial meta-point: Anti-monopoly regulations need to be clear and simple, and based on principles, as opposed to complex rules.

Next steps

Simple: Let’s start by breaking up a few banks and big-techs and agribusiness empires. This book probably isn’t the reason why, but public perception has swung strongly against The BigCos. At this point in history they’re soft targets and the rewards to all of us from doing this are potentially huge. (By the way, this includes their shareholders.)

The best time to have done this would have been ten years ago. The next best time is now.


At CRAB Park 31 May 2020, 9:00 pm

On an errand, we felt the need to be outside near the sea. The closest opportunity was CRAB Park (see here). We hadn’t planned on visiting a homeless encampment but did that too. I seriously recommend the experience. Also I got pleasant pictures to accompany the story.

CRAB is the nearest green waterfront to Vancouver’s Downtown Eastside, which means some of the people you see have been considerably damaged by what we call “civilization”, and show it. It’s also real close to the Port of Vancouver, third-largest in North America and biggest in Canada (only 29th in the world). Thus these views:

Vancouver Port from CRAB park

Some parts of the port seem remarkably ad-hoc.

Marine engineering near CRAB park

Homelessness

I saw a cluster of tents in an adjoining parking lot. In Vancouver that means homeless people, which is a little scary to a nice-neighborhood greybeard like me even without Covid-19 maybe lurking where you’re not looking. But this one had a fire burning and signs on display. Feeling an open-for-visitors vibe, I wandered in.

The signs said “reconciliation is dead”. The fire was cedar, an Indigenous woman by it explaining the issues around unceded territory to a typically-Vancouver White/Asian couple.

A tiny dog ran up to me, eager for a sniff at my hand. After we’d made friends a nice lady came up, tucked the dog into her shirt, and told me a whole lot about what was going on. She started by saying I could make an offering at their fire with tobacco, which was provided, so I did. The cedar-and-tobacco smoke smelt great. Here’s the bench by the fire.

At the CRAB park homeless encampment.

I wish I’d asked her name. I make no claim to a deeper understanding than anyone reading this, so I’m just going to enumerate what I heard as I listened.

  1. Sometime in 2019 the BC government transferred the homeless file from a social-services ministry to the law-and-order ministry, and this has not been helpful.

  2. The encampments have inclusion levels depending, among other things, on their tolerance for alcohol or other intoxicants. I didn’t learn the level of the one I was at.

  3. A couple of rope-thin white guys were breaking up skinny pieces of metal. It seemed like hard work but they were cracking jokes. I don’t know what they were doing with them.

  4. People die in these communities, but rarely. An occasional overdose and recently a 51-year-old who went to sleep and didn’t wake up. “People without a home have a life expectancy roughly half of people who do”, she pointed out.

  5. There are a lot of indigenous people here, but also a lot who aren’t.

  6. A guy walked up smiling with a bunch of plastic on a dolly and told her “Three tarps, they wanted sixty bucks and I had seventy, so I threw in an extra ten. Now you owe me $10.” She said “I’ll take care of it.” Still smiling: “S’ok, you don’t need to.”

  7. The provincial government is trying to get the homeless out of tents and into hotels. But they want to take the 65-plus and otherwise-vulnerable first. Then, they need an actual legal name; but suppose you have mental-health and/or legal issues (for both reasons, you might not be in a position to offer that kind of name) and/or you aren’t 65 yet? Well, you’re probably in a tent at CRAB park or equivalent.

  8. I couldn’t help but be conscious of the big black SUV labeled “Port of Vancouver Police” (I may have the wording wrong) at the far end of the parking lot, a silhouette visible in the front seat. Since the port is well-known to be gang-infested, you have to worry about what their cops are like.

  9. Some of the people have dementia (not always age-related). They do the best they can.

  10. There were canvas rain-covers set up with cases of fresh-water bottles available and other stuff I didn’t check out. The Portland Hotel Society helps out, I gather, as do other community members. I think I’m going to have to visit and drop some things off. But if it’s like most distressed-community situations, money is most helpful.

Walking away

Suddenly one of Vancouver’s biggest problems had moved out of the “abstract” category in my mind. I found I had trouble speaking because I didn’t know how to process the input.

I could still take pictures though, where the park met the port.

Near CRAB Park in Vancouver Near CRAB Park in Vancouver

If you’re a photographer, or just want to get closer to the edge of your comfort zone and meet people who could use your help, I strongly recommend a visit to CRAB park.


Topfew Fun 18 May 2020, 9:00 pm

This was a long weekend in Canada; since I’m unemployed and have no workaday cares, I should have plenty of time to do family stuff. And I did. But I also got interested in a small programming problem and, over the course of the weekend, built a tiny tool called topfew. It does a thing you can already do, only faster, which is what I wanted. But I remain puzzled.

[To those who are interested in Amazon or politics or photography, feel free to skip this. I’ll come back to those things.]

What it does

Back in 2016 I tweeted Hello "sort | uniq -c | sort -rn | head" my old friend, and got a few smiles. If you feed that incantation a large number of lines of text it produces a short list showing which among them appear most often, and how many times each of those appears.

It’s a beautiful expression of the old Unix craft of building simple tools that you can combine to produce surprising and useful results. But when the data gets big, it can also be damn slow. I have a little script on the ongoing Web server that uses this technique to tell me which of my pieces are most popular, and how many times each of those was read.

In the last couple of weeks, my logfiles have gotten really big, and so my script has gotten really slow. As in, it takes the best part of a minute to run and, while it’s running, it sucks the life out of that poor little machine.

It turns out to be a pretty common problem. Anyone who’s operating an online service finds themselves processing logfiles to diagnose problems or weirdness, and you regularly need to answer questions like “who are the top requestors?” and “what are the most common API calls?”

What it is

Topfew is a simple-minded little Go program, less than 500 lines of code and almost half of that tests. If you have an Apache server log named access_log, and you wanted to find out what the top twelve IP addresses were hitting your server hardest, you could say:

awk '{print $1}' access_log | sort | uniq -c | sort -rn | head -12

With topfew, you’d say:

tf -fields 1 -few 12 access_log (“tf” is short for “topfew”.)

The two would produce almost the same output, except that topfew would be faster. How much faster? Well, that’s complicated (see below)… but really quite a bit.

I don’t think there’s anything terribly clever about it; my bet is that if you asked a dozen senior developers to write this, they’d all end up with nearly identical approaches, and they’d all be pretty fast. Anyhow, as the links here make obvious, it’s on GitHub. If anyone out there needs such a thing and hasn’t written their own in a weekend project, feel free to use and improve.

Since I haven’t done a Go project from scratch in a long time, I’m not only out of practice but out of touch with current best practices, so it’s probably egregiously unidiomatic in lots of ways. Oh well.

How fast?

Well, that’s interesting. I ran a few tests on my Mac and it was many many times faster than the sort/uniq incantation, so I smiled. I thought “Let’s try bigger data!” and built a Linux version (saying GOOS=linux go build still feels like living in the future to me), hopped over to my Web server, and ran exactly those queries from the last section on a recent Apache logfile (8,699,657 lines, 2.18G). Topfew was faster, but not many times faster like on my Mac. So I ran it on the Mac too. Here’s a summary of the results. All the times are in seconds.

         cp time   sort/uniq time   topfew time   speedup net of I/O
Linux      17.29            37.66         26.75                2.15x
Mac         1.40           137.46          8.81               18.36x

The first column (“cp time”) is the number of seconds it takes to copy the big file from one directory to another. The last column is produced by subtracting the “cp” time from each of the other two, leaving something like the amount of extra latency introduced by the computing in excess of the time spent just copying data, then comparing the results.

Anyhow, those are some puzzling numbers. Stray thoughts and observations that I’m not going to dignify with the term “hypotheses”:

  1. The Mac has a damn fast SSD.

  2. The Linux box’s disk is sluggish.

  3. Perhaps the basic tools like sort and uniq are surprisingly fast on Linux or the Go runtime is surprisingly slow. Or, you know, the mirror image of that statement.

  4. The sort/uniq incantation is inherently parallel while topfew is 100% single-threaded. Maybe the Mac is dumb about scheduling processes in a shell pipeline across its eight cores?

Giggling

I broke out the Go profiler (Go is batteries-included in ways that other popular languages just aren’t) and ran that command against the Apache logfile again. I was poking around the output and Lauren said “I hope you’re laughing at the computer, not at me talking to the cat.”
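For reference, wiring in the CPU profiler takes only a few lines of runtime/pprof; in this sketch, run() is a hypothetical stand-in for the program’s real work:

package main

import (
    "os"
    "runtime/pprof"
)

func run() {
    // ... the actual program ...
}

func main() {
    f, err := os.Create("cpu.prof")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    if err := pprof.StartCPUProfile(f); err != nil {
        panic(err)
    }
    defer pprof.StopCPUProfile()
    run()
}

Then “go tool pprof cpu.prof” gets you the interactive report.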

The profiler claimed that 73% of the time was spent in syscall.syscall reading the input data, and the first trace of actual application code was 2.27% of the time being spent in the regexp library doing field splitting (on \s+).

The core code that actually accumulates counts for keys was consuming 1.54% of the elapsed time, and 93% of that was updating the total in a map[string]uint64.

Next steps?

I’m probably curious enough to poke around a little more. And maybe someone reading this will kindly explain to me in words of one syllable what’s going on here. But anyhow, the tool is useful enough as it is to save me a noticeable amount of time.


Meta Blog 13 May 2020, 9:00 pm

My recent Amazon-exit piece got an order of magnitude more traffic than even the most popular outings here normally do. Which turned my mind to thoughts of blogging in 2020, the why and how of the thing. Here they are, along with hit-counts and referer data from last week. Probably skip this unless you’re interested in social-media dynamics and/or publishing technology.

Numbers

In the first week after publication, Bye, Amazon was read somewhat more than 614,669 times by a human (notes on the uncertainty below).

The referers with more than 1% of that total were Twitter (18.13%), Hacker News (17.09%), Facebook (10.55%), Google (4.29%), Vice (3.40%), Reddit (1.66%), and CNBC (1.09%). I think Vice was helped by getting there first. I genuinely honestly have no idea how the piece got mainlined so fast into the media’s arteries.

For comparison, my first Twitter post is up to 1.015M impressions as of now and, last time I checked, Emily’s was quite a ways ahead.

It’s hard for me to know exactly how many people actually read any one ongoing piece because I publish a full-content RSS/Atom feed. Last time I checked, I estimated 20K or so subscribers, but nobody knows how many actually read any given piece. If I cared, I’d put in some kind of a tracker. That 614K number above comes from a script that reads the log and counts the number of fetches executed by a little JavaScript fragment included in each page. Not a perfect measure of human-with-a-browser visits but not terrible.
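A sketch of that counting idea in Go (not the actual script; the “/track.js” tracker path is hypothetical):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    f, err := os.Open("access_log")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    reads := 0
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        // Each fetch of the JavaScript fragment shows up as a GET in
        // the Apache log; count those as human-with-a-browser reads.
        if strings.Contains(scanner.Text(), "GET /track.js") {
            reads++
        }
    }
    fmt.Println(reads, "human-ish page reads")
}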

But aren’t blogs dead?

Um, nope. For every discipline-with-depth that I care about (software/Internet, politics, energy economics, physics), if you want to find out what’s happening and you want to find out from first-person practitioners, you end up reading a blog.

They’re pretty hard to monetize, which means that the people who write them usually aren’t primarily bloggers, they’re primarily professional economists or physicists or oil analysts or Internet geeks. Since most of us don’t even try to monetize ’em, they’re pretty ad-free and thus a snappy reading experience.

Dense information from real experts, delivered fast. Why would you want any other kind?

Static FTW

ongoing ran slow but anyone who was willing to wait fifteen seconds or so got to read that blog. One reason is that the site is “static” which is to say all the payload is in a bunch of ordinary files in ordinary directories on a Linux box, so the web server just has to read ’em out of the disk (where by “disk” I mean in-memory filesystem cache when things are running hot) and push ’em out over the wire. (The bits and pieces are described and linked to from the Colophon.)

It turns out that at the very hottest moments, the Linux box never got much above 10% CPU, but the 100M virt NIC was saturated.

I’ve never regretted writing my own blogging system from the ground up. I’m pretty sure I’ve learned things about the patterns of traffic and attention on the Internet that I couldn’t have learned any other way.

If I were going to rewrite this, since everything’s static, I’d just run it out of an S3 bucket and move the publishing script into a Lambda function. It’d be absurdly cheap and it’d laugh at blog storms like last week’s.

It’s not a top priority.


Responses 6 May 2020, 9:00 pm

Boy, when your I’m-outta-here essay goes viral, do you ever get a lot of input. A few responses came up often enough to be worth sharing. This was via email, Twitter DMs, blog comments, and LinkedIn messages. All of which went completely batshit.

So, some answers. But first…

Thanks for the kind words!

I had no notion how the world might react to a cranky old overpaid geek’s public temper tantrum. The world’s been astonishingly warm and welcoming. Apparently I hit a hot button I didn’t know existed. The crankiest geek on the planet couldn’t fail to have their heart warmed. So in a huge number of cases, simply “Thanks for the kind words” was the right thing to say.

Dear readers: Yes, some of the answers described in this piece were kept handy in editor buffers and delivered by cut-and-paste. But I did read a lot of the messages, and all the ones I actually responded to.

“Can you come on our TV show?”

You name it: ABC, BBC, Bloomberg, CBC, CBS, CTV, CNBC, CNN, NBC, NPR, and a whole lot of cool blogs and indies. Also Anderson Cooper’s people!

“Can we get on the phone so I can ask you some questions?”

A variation on the theme, from non-broadcast organizations.

They all got the same answer: “Hmm, I'm not that interesting, just a grumpy old one-percenter white-guy engineer with a social media presence. If you want to go live with this story you should do it with the actual people who got fired, who are young, fresh-faced, passionate, and really at the center of the news story. I recommend reaching out to Emily Cunningham (contact info redacted) or Maren Costa (same) or Chris Smalls (same).”

“OK, we talked to them. Now will you talk to us live?”

These people were nice and just trying to do their job. I agreed to answer a couple of email questions in a couple of cases, but mostly just said “For complicated reasons, I don’t want to be the public face of this story. Sorry.”

“Complicated reasons?”

Yeah, the story is about firing whistleblowers, not about a random Canadian Distinguished Engineer’s reaction to it. So news organizations should follow the primary sources, not me.

There’s more. I put a lot of thought into what I should say, and then really a whole lot of work into writing that blog piece. I had help with style and fact-checking. (Thanks, Lauren. Thanks, Emily.) It is very, very, unlikely that anything I’d improvise on a phone-call or TV interview would be better. I’ve also had bitter first-hand experience with the Gell-Mann amnesia effect.

I think it worked. The news coverage, lacking alternatives, quoted heavily from the blog, and I thought basically all of it came out fair and accurate. Let’s acknowledge that this tactic is only available to someone who’s near the center of a news story, is a competent writer, and has a good place to publish.

I’ve no interest in becoming some sort of full-time anti-Amazon activist. I just don’t want to work for an organization that fires whistleblowers. I said so. It looks like the message got through.

“What response did you expect?”

I have seventeen years of blogging scars, so I speak from experience in saying: No idea. I’ve had blogs that I considered mightily important and labored over for days sink like a stone with no trace. Then I’ll toss off some three-paragraph squib that I wrote while watching TV and drinking gin, and it goes white-hot. Neither outcome would have surprised me.

“What were you trying to accomplish?”

I’m a blogger. I’ve been writing the story of my life here for seventeen years. Enough people read it and respond to give me hope that it’s at least intermittently interesting, and perhaps even useful. I’m a writer, I can’t not write. This is a major turning point in my life. I totally couldn’t not write it. That’s all. That’s really, really why.

“What about Brad’s piece?”

They’re asking about Response to Tim Bray’s departure by Brad Porter. Since he has the same “VP/Distinguished Engineer” title I did, you’d think he’d be a peer. Actually he’s way more important and influential than I used to be, partly because he’s been there since the early days and is directly involved with the retail operation.

I believe that (as Brad says) Amazon retail is working intensely and intelligently to make the warehouses safer. But I also believe the workers. And anyhow, firing whistleblowers is just way, way out of bounds.

While it’s sort of a sideshow to the real issue here (firing whistleblowers), Brad also wrote:

Ultimately though, Tim Bray is simply wrong when he says “It’s that Amazon treats the humans in the warehouses as fungible units of pick-and-pack potential.” I find that deeply offensive to the core.

We’ll have to agree to disagree. If you run an organization with hundreds of thousands of line workers and tens of thousands of managers, and where turnover is typically significant, you need processes where the staff are fungible. Two things can be true at once: You work hard to preserve your employees’ health, and your administrative culture treats them as fungible units.

I actually found the patterns emerging in the responses to Brad’s piece more culturally interesting than his original post.

And hey, bonus: There’s another Amazon voice in the conversation: Anton Okmyanskiy, who’s a “Principal Engineer”, which is to say regarded as a world-class technologist, wrote Tim Bray quit Amazon. My thoughts.... It contains the remarkable sentence “Amazon should stay ahead of anger-driven regulatory enforcement by becoming a leader on social justice issues.”

It’d be great if a few more Amazonian voices weighed in. But I’m not holding my breath.

“Any regrets?”

Yes, I regret intensely that I didn’t link to Emily Cunningham’s original “Amazon fired me” tweet thread, which is exquisite (you have to click “show this thread”).

Favorite response?

Note: header not in quotes because nobody asked, but I’ll answer anyhow. I could drop a dozen portentous media-heavyweight names and yeah, pretty well everyone weighed in. But it’s not close; my fave was Wonkette: Amazon VP VIP Tim Bray Quitfires Self Over 'Chickensh*t' Activist Quitfirings. It says, of yr humble scrivener, “Come the revolution, let's remember not to eat this one” and “Class Traitor of the Day”. These were lodged in what I thought was a thoroughly lucid and spirited take on the situation.

Once again

Thanks for the kind words!


Bye, Amazon 29 Apr 2020, 9:00 pm

May 1st was my last day as a VP and Distinguished Engineer at Amazon Web Services, after five years and five months of rewarding fun. I quit in dismay at Amazon firing whistleblowers who were making noise about warehouse employees frightened of Covid-19.

What with big-tech salaries and share vestings, this will probably cost me over a million (pre-tax) dollars, not to mention the best job I’ve ever had, working with awfully good people. So I’m pretty blue.

What happened

Last year, Amazonians on the tech side banded together as Amazon Employees for Climate Justice (AECJ), first coming to the world’s notice with an open letter promoting a shareholders’ resolution calling for dramatic action and leadership from Amazon on the global climate emergency. I was one of its 8,702 signatories.

While the resolution got a lot of votes, it didn’t pass. Four months later, 3,000 Amazon tech workers from around the world joined in the Global Climate Strike walkout. The day before the walkout, Amazon announced a large-scale plan aimed at making the company part of the climate-crisis solution. It’s not as though the activists were acknowledged by their employer for being forward-thinking; in fact, leaders were threatened with dismissal.

Fast-forward to the Covid-19 era. Stories surfaced of unrest in Amazon warehouses, workers raising alarms about being uninformed, unprotected, and frightened. Official statements claimed every possible safety precaution was being taken. Then a worker organizing for better safety conditions was fired, and brutally insensitive remarks appeared in leaked executive meeting notes where the focus was on defending Amazon “talking points”.

Warehouse workers reached out to AECJ for support. They responded by internally promoting a petition and organizing a video call for Thursday April 16 featuring warehouse workers from around the world, with guest activist Naomi Klein. An announcement sent to internal mailing lists on Friday April 10th was apparently the flashpoint. Emily Cunningham and Maren Costa, two visible AECJ leaders, were fired on the spot that day. The justifications were laughable; it was clear to any reasonable observer that they were turfed for whistleblowing.

Management could have objected to the event, or demanded that outsiders be excluded, or that leadership be represented, or any number of other things; there was plenty of time. Instead, they just fired the activists.

Snap!

At that point I snapped. VPs shouldn’t go publicly rogue, so I escalated through the proper channels and by the book. I’m not at liberty to disclose those discussions, but I made many of the arguments appearing in this essay. I think I made them to the appropriate people.

That done, remaining an Amazon VP would have meant, in effect, signing off on actions I despised. So I resigned.

The victims weren’t abstract entities but real people; here are some of their names: Courtney Bowden, Gerald Bryson, Maren Costa, Emily Cunningham, Bashir Mohamed, and Chris Smalls.

I’m sure it’s a coincidence that every one of them is a person of color, a woman, or both. Right?

Let’s give one of those names a voice. Bashir Mohamed said “They fired me to make others scared.” Do you disagree?

[There used to be a list of adjectives here, but voices I respect told me it was mean-spirited and I decided it didn’t add anything so I took it out.]

What about the warehouses?

It’s a matter of fact that workers are saying they’re at risk in the warehouses. I don’t think the media’s done a terribly good job of telling their stories. I went to the video chat that got Maren and Emily fired, and found listening to them moving. You can listen too if you’d like. Up on YouTube is another full-day videochat; it’s nine hours long, but there’s a table of contents, you can decide whether you want to hear people from Poland, Germany, France, or multiple places in the USA. Here’s more reportage from the NY Times.

It’s not just workers who are upset. Here are Attorneys-general from 14 states speaking out. Here’s the New York State Attorney-general with more detailed complaints. Here’s Amazon losing in French courts, twice.

On the other hand, Amazon’s messaging has been urgent that they are prioritizing this issue and putting massive efforts into warehouse safety. I actually believe this: I have heard detailed descriptions from people I trust of the intense work and huge investments. Good for them; and let’s grant that you don’t turn a supertanker on a dime.

But I believe the worker testimony too. And at the end of the day, the big problem isn’t the specifics of Covid-19 response. It’s that Amazon treats the humans in the warehouses as fungible units of pick-and-pack potential. Only that’s not just Amazon, it’s how 21st-century capitalism is done.

Amazon is exceptionally well-managed and has demonstrated great skill at spotting opportunities and building repeatable processes for exploiting them. It has a corresponding lack of vision about the human costs of the relentless growth and accumulation of wealth and power. If we don’t like certain things Amazon is doing, we need to put legal guardrails in place to stop those things. We don’t need to invent anything new; a combination of antitrust and living-wage and worker-empowerment legislation, rigorously enforced, offers a clear path forward.

Don’t say it can’t be done, because France is doing it.

Poison

Firing whistleblowers isn’t just a side-effect of macroeconomic forces, nor is it intrinsic to the function of free markets. It’s evidence of a vein of toxicity running through the company culture. I choose neither to serve nor drink that poison.

What about AWS?

Amazon Web Services (the “Cloud Computing” arm of the company), where I worked, is a different story. It treats its workers humanely, strives for work/life balance, struggles to move the diversity needle (and mostly fails, but so does everyone else), and is by and large an ethical organization. I genuinely admire its leadership.

Of course, its workers have power. The average pay is very high, and anyone who’s unhappy can walk across the street and get another job paying the same or better.

Spot a pattern?

At the end of the day, it’s all about power balances. The warehouse workers are weak and getting weaker, what with mass unemployment and (in the US) job-linked health insurance. So they’re gonna get treated like crap, because capitalism. Any plausible solution has to start with increasing their collective strength.

What’s next?

For me? I don’t know, genuinely haven’t taken time to think about it. I’m sad, but I’m breathing more freely.


Mac Migration Pain 25 Apr 2020, 9:00 pm

What happened was, my backpack got stolen with my work and personal Macs inside. The work machine migrated effortlessly but I just finished up multiple days of anguish getting the personal machine going and thought I’d blog it while it was fresh in my mind. This might be helpful to others, either now or maybe later at the end of a Web search. But hey, I’m typing this ongoing fragment on it, so the story had a happy ending. And to compensate for its length and sadness, I’ll give it a happy beginning; two, in fact.

Happy corporate

The IT guys were excellent; they’re in quarantine mode too but somehow arranged on two hours’ notice that I could drop by the spookily-empty office, and there it was on a shelf outside the IT room. I’d backed up everything that mattered straight to S3; it restored fast and painlessly, and all I had to do was re-install a couple of apps.

As I’ve said before, Arq worked well. But as you will soon learn, there are pointy corners and boy did I ever get bruises.

Happy Ukelele

When I’m writing text for humans to read, I like to use typographical quotes (“foo” rather than "foo"), apostrophes (it’s not it's), and ellipses (… is one character not three). These are included in every font you’re likely to have, so why not use them? The default Mac keystrokes to get them are awkward and counter-intuitive.

Since (blush) I hadn’t backed up the keyboard layout I’d been running for years, I went and got Ukelele, a nifty little software package that makes it absurdly easy to modify your Mac keyboard layout. Here’s a screenshot.

Ukelele

I produced a layout called “Typographic Quotes” which remaps four keys:

  1. Opt-L is “

  2. Opt-; is ”

  3. Opt-' is ’

  4. Opt-. is …

Anyhow, here it is (with a nice icon even); download and expand it, then go looking on the Web for instructions on adding it to your version of MacOS. It’s not rocket science.

Another nice thing about Ukelele: It ships with maybe the best How-To tutorial I’ve ever encountered in my lifetime of reading software how-tos.

Who am I?

When I set up the new machine, it asked me if I wanted to transfer from an older one and I said no; so it unilaterally decided that my Mac username (and home directory) would be “/Users/timothybray” as opposed to the previous “twbray”. Sort of dumb and clumsy but you wouldn’t think it’d hurt that much. Hah.

Arq hell

My backup, made with a copy of Arq 5 that I paid for, is on our Synology NAS. I had to fumble around a bit to remember how to turn it into an AFP mount, then I went to arqbackup.com and hit the first Download button I saw. Once I fired up Arq I started, at a moderate pace and step by step, to be driven mad. Because Arq flatly refused to see the network filesystem. I tried endless tricks and helpful-looking suggestions I found on the Web and got nowhere.

Here’s a clue: I’d typed in my license code and Arq had said “Not an Arq 6 license code”, which baffled me, so I went ahead in free-trial mode to get the restore running. Wait a minute… all the backups were with Arq 5. OK then. So I went and dug up an Arq 5 download.

It could see the NAS mounts just fine but things didn’t get better. The backup directory looked like /Volumes/home/tim/Backup and there were two different save-sets there, a special-purpose one-off and my “real” backup, charmingly named “C1E0FFBA-D68B-4B51-870A-F817FC0DF092”. So I pointed Arq at C1E0… and hit the “Restore” button and it… did nothing. Spun a pinwheel for like a half-second and stopped and left nothing clickable on the screen. Nor was there any useful information in any log I could find.

Rumor has it that I impressed my children with the depth and expressiveness of my self-pitying moans.

So I went poking around to look at the backup save-set that Arq was refusing to read, and noticed that its modification date was today. Suddenly it seemed perfectly likely that I’d somehow corrupted it, and blackness was beginning to close in at the edges of my vision.

[Yes, dear reader, the Synology in turn backs itself up to S3 but I’d never actually tried to use that so my faith that it’d even work was only modest.]

I finally got it to go. Protip: Don’t point Arq at your saveset; point it at the directory that contains it. Why, of course!

So I launched the restore and, being a nervous kinda guy, kept an eye on the progress window. Which started to emit a series of messages about “File already exists”. Huh? Somehow I’d fat-fingeredly started two Arq restores in parallel and they were racing to put everything back. So I stopped one but it still kept hitting collisions for a long, long time. Anyhow it seemed to work.

Now, it turned out that when Arq started up, it pointed out that it was going to restore to /Users/twbray, where the files had come from; so I pointed it at “timothybray” instead.

Lightroom hell

I really care about my pictures a lot. So I fired up Lightroom Classic, it grunted something about problems with my catalog, and limped up in pile-of-rubble condition. I have a directory called “Current” that all the pictures I’m working on live in before they get filed away into the long-term date hierarchy, and it seemed to contain hundreds of pictures from years back, and my date tree didn’t have anything after September 2017, and all the pictures in Current that I could see were from my phone, none from any of the actual camera cameras.

I tried restoring one of the catalog backups that Lightroom helpfully generates, but that wasn’t helpful.

I may once again have made more loud self-pitying noises than strictly necessary. I poked around, trying to understand what was where and absolutely couldn’t make any sense of things. Scrolling back and forth suggested the Fuji camera photos were there, but Lightroom couldn’t locate them.

Eventually it dawned on me that Lightroom expects you to have your photos splashed around across lots of disks, so in its catalog it puts absolute file pathnames for everything, nothing relative. So the photos were all under “timothybray” and the catalog pointers were all into “twbray”. So how the freaking hell did Lightroom find all the Pixel photos!?!?

Since I’d made probably-destructive changes while thrashing around, I just Arq-restored everything to /Users/twbray and left it there and now Lightroom’s happy.

Was my trail of tears over? If only.

Blogging hell

The software that generates the deathless HTML you are now reading was written in 2002, a much simpler and more innocent time, in Perl; in fact it comprises one file named ong.pl which has 2,796 lines of pure software beauty, let me tell ya.

It uses Mysql to keep track of what’s where, and Perl connects to databases via “DBD” packages, in this case DBD::mysql. Which comes all ready to install courtesy of CPAN, Perl’s nice package manager.

You get all this built right into MacOS. Isn’t that great? No, it isn’t, because it plays by Apple rules, which means it relies on something called @rpath. Which can go burn in hell, I say. The effect of it being there is that things that load dynamically refuse to load from anywhere but /usr/lib, and your modern MacOS versions make it completely physically impossible to update anything on that filesystem. So in effect it’s impossible to install DBD::mysql using the system-provided standard CPAN. There’s this thing called install_name_tool that looks like it should in theory help, but the instructions include poking around inside object libraries and life’s too short.

SQL hell

You might well ask “Why on earth are you using mysql anyhow in 2020? Especially when it’s a dinky blogging system with a few thousand records?” My only answer is “It was 2002” and it’s not a good one. So I thought maybe I could wire in another engine, and the obvious candidate was SQLite, software that I’ve had nothing but good experience with: Fast, stripped-down, robust, and simple.

DBD::SQLite installed with no trouble, so I handcrafted my database init script to fix some trifles SQLite complained about, and fired up the blogging system. Boom crash splat fart bleed. It turns out that the language mysql calls “SQL” is only very loosely related to the dialect spoken by SQLite. Which one’s right? I don’t fucking care, OK? But it would have been a major system rewrite.
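To make the flavor of the mismatch concrete, here’s a minimal sketch (in Python rather than Perl; the dialect clash is the same whatever client you use). The table definition is a hypothetical stand-in, not ong.pl’s actual schema: perfectly ordinary MySQL DDL that SQLite rejects outright.

import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # Ordinary MySQL DDL; the ENGINE clause (and AUTO_INCREMENT) are MySQL-isms
    conn.execute(
        "CREATE TABLE posts (id INT AUTO_INCREMENT PRIMARY KEY,"
        " title TEXT) ENGINE=InnoDB"
    )
except sqlite3.OperationalError as e:
    print("SQLite says:", e)  # near "ENGINE": syntax error

# The SQLite spelling of the same idea
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT)")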

Docker hell

So I took pathetically to Twitter and whined at the world for advice. The most common suggestion was Docker; just make an image with mysql and Perl and the other pieces and let ’er rip. Now I’ve only ever fooled around unseriously with Docker, and I quickly learned that I was just not up to this challenge. Yep, I could make an image with Perl and DBD::mysql, and I could make one with mysql, but could I get them to talk to each other? Could I even connect to the Dockerized mysql server? I could not, and all the helpful notes around the Net assume you already have an M.Sc. in Dockerology.

So this particular hell was really my fault as much as any piece of software’s.

John Siracusa

Deus ex machina!

Among the empathetic chatter I got back on Twitter, exactly one person said “I have DBD::mysql working on Catalina.” That was John Siracusa. I instantly wrote back “how?” and the answer was simple: Don’t use the Apple flavors of perl, build your own from scratch and use that. All of a sudden everything got very easy.

  1. Download the source from Perl.org. Since I’d filled up /usr/local with Homebrew stuff, I worked out of /usr/local/tim.

  2. ./Configure -des -Dprefix=/usr/local/tim

  3. make

  4. make test

  5. make install

  6. PATH=/usr/local/tim/bin:$PATH

  7. cpan install DBI

  8. cpan install DBD::mysql

Then everything just worked.

Now, I have to say that watching those Configure and make commands in action is a humbling experience. In 2020 it’s fashionable to diss Perl, but watching endless screens-full of i-dotting and t-crossing and meticulous testing flow past reminded me that those of us doing software engineering and craftsmanship need a healthy respect for the past.

Was I done?

Of course not. A whole bunch of other bits and pieces needed to be installed from Homebrew and CPAN and Ruby Gems and so on and so forth, but that’s not the kind of thing that leaves you frustrated or feeling bad.

Boy, next time I do this, it’ll go better. Right?


Early 2020 Mac Upgrades 5 Apr 2020, 9:00 pm

With notes on microphones, Apple’s Magic Keyboard, laptop stands, and my [gasp!] new email client, MailMate.

Suddenly everyone’s working from home. Our home isn’t that big — in particular, only one office — and there are three other people working or studenting from it, so instead I’ve been using our boat. It’s cramped but scenic. As the number of weeks that I’ll be doing this stretches out into the unknowable future, there’s a chance for geek accessorization! I believe that in these days those of us who still have incomes should make a little effort to spread it around, especially when it might boost our productivity a bit, or anyhow our morale.

Audio

My job involves meetings every day. I viscerally loathe headphones after the first hour or so; in my tiny boxy office at work I have decent audio and a big ugly Shure MV51 mike that I don’t love; its aesthetic is brutalist and it doesn’t work well with Amazon Chime echo cancellation, so I often have to do the fast-finger unmute/mute when I want to talk.

Shure MV5

For the boat I got an MV5 which is smaller, prettier, and plays nicer with Chime. Highly recommended.

The boat has a stereo that while lousy includes Bluetooth, so I send the Mac audio there. The speakers in the cabin ceiling are shitty but still better than those on the ancient 2015 MBPro I’m nursing along until corporate starts shipping the new 16" with the good keyboard. Speaking of which….

Keyboard

I have a decent little Dell 24" 4K outboard monitor and I was just parking the laptop in front of it and using its keyboard and trackpad. And this wasn’t terrible but I was really short on desk space for tea and snacks and phone and notepad.

I’ve been using and liking Apple outboard keyboards forever, so I got a “space grey” Apple Magic Keyboard and wow, it was good before but it’s better now. There’s no longer a distinction between wired and Bluetooth models; they’re all wireless now, but come with a USB cable for charging.

To set it up you plug in the USB and by the time you look at the screen there’s a popup saying “Bluetooth keyboard paired kthsby.”

The key travel and feel are perfection, and it’s quiet, which I like and is basic courtesy in a person who lives in videoconferences.

Apple black Magic Keyboard

There’s one awful flaw though, which I discussed in 2008: The keyboard intelligently includes big phat arrow keys and the Home/End/Delete cluster, but stupidly insists on having the numeric keypad, of interest only to accountants. I regret the waste of scarce desk surface.

Stand

I have two problems with the Mac being on the desktop. First, chronic neck pain from a lifetime of looking down at laptops. Second, everybody I teleconference with gets a view featuring my grizzled neck and chin. So I went looking for a stand and, on the Wirecutter’s recommendation, bought a Roost. It’s a really slick piece of design.

But take a moment and look at the enclosed instructions. I first unfolded them late at night in dim lamplight and thought they were glyphs scrawled in blackened blood by a giant tentacle on the vast walls of R’lyeh; words no human throat could form, the sound of which would rip men’s minds to bleeding shreds.

Roost laptop stand instructions

I’d failed to notice the instructions until I had figured out the device, which is easy enough. Just as well, my R’lyehian is weak.

Email

At work, the server is (I think) Exchange, and you can use more or less any client that can talk to it. I started with Outlook but quickly found the Office aesthetic soul-crushing. Then I used Mail.app for a while, but it’s just slooooow. Then I moved over to Outlook Web App and yeah, that Outlook flavor, but it’s freakishly fast. (There’s this thing called a Web browser where all you need to refresh is what the human is looking at, and OWA takes good advantage.) But the composer is lame and I had to refresh too often, losing the speed advantage.

MailMate

So a smart person at work recommended MailMate, saying it was “Quirky but fast and powerful” which I’d say is fair: As fast as OWA. I’m only a few days in but I’m not going back. Every single time I’ve thought “I want to make it do X” it’s turned out that there’s a nice straightforward way to get there.

One particular feature I love: It’s easy to turn off the red unread count in the Dock icon, which is a productivity plus: I want to pick the time, unprompted, when I turn my attention to my email.

I occasionally get a little lost navigating through deeply nested email threads (when something goes viral on the Amazon Principal-Engineers mailing list, you’re way into spaghetti territory). But I assume I’ll get the hang of it.

Boat as a Mac accessory

It’s not a big boat, just over 24' in length; so I’m cramped; there is exactly one place to sit, with very limited scope for squirming. There’s no HVAC system run by Other People keeping the temperature controlled.

WFB WFB

But it’s a treat to be down by the water (roughly here). I’m hyper-aware of each day’s changeful weather story. I perceive the passage of seasons with microscopic precision as the sun’s path changes week to week, and I adjust curtains to dodge the dazzle.

I have a 25-minute bike commute that puts opening and closing punctuation around the day.

I get visits from seals, cormorants, mallards, crows, and gulls, plus there are two eagles nesting in a nearby tree.

I have a little burner to make tea on, a tiny fridge for my sandwich, and a hard narrow berth for those irresistible nap attacks.

I’m fully aware that all this is down to my decades-long run of good luck, and that I’m once again enjoying hyperoverprivilege.


Plague Journal, April 4 4 Apr 2020, 9:00 pm

I’m an optimist so I don’t put the year in the title. Once again: Writing is therapeutic. Open up whatever program you use to write stuff in, and see what comes out. Today’s adventure was the Socially Distanced Farmers’ Market.

Socially Distanced Farmers’ Market

They were organized as hell, the market subdivided into three Zones, each with its own line-up, social distance chalked on the sidewalk. The people density was unusually high in the neighborhood and a lot of people have started just walking down the middle of the street, screw the motorists. I find this cheering.

People are so open and friendly! Everyone has a smile for everyone and random conversations break out between strangers. This is usually a good thing, but in the Zone 1 lineup I found myself behind a conspiracy-theorist, talking about how the “higher ups” were getting ready to impose food rationing.

I think serious food shortages are unlikely in the developed world, but parts of our agricultural industry depend for harvesting on poorly-paid, abusively-treated migrant laborers. When they can’t come, we’ll see whether agribiz lets the produce rot in the fields or raises wages enough to attract unemployed Canadians, with produce prices rising correspondingly.

Market menu

Potatoes. More potatoes. Damn, really a lot of potatoes. We have a stash but I bought fingerlings anyhow because they were cute. The first new harvest in mid-spring is rhubarb; I bet there’s some starting next week.

The shopping list comprised apples, rolled oats, salad greens, dill, chives, cilantro, and a red onion. I scored about 50% — by noon-ish when I got there, a lot of vendors’ shelves were looking bare. But also I got artisanal gin, handcrafted chocolate, and wildflower honey.

We’re having a virtual movie party with a friend this evening; Turtle Diary, I think.

Take time to be kind to each other; loved ones and strangers too.


MacOS Lore, Early 2020 4 Apr 2020, 9:00 pm

I’ve used a Mac for an extremely long time; long enough that this blog’s topic tag is Mac OS X, what macOS used to be called. Herewith a looking-back summary.

Previously on Mac Lore

Clicking on that “Mac OS X” link cost me a half hour rat-holing on my journey over the years. So you don’t have to, here’s a compendium of advice I give people about the Right Way To Do It; although some of the recommendations are not Mac-specific:

  1. My Dock

    Your screen is wider than it is high. So put the Dock on the side not the bottom.

  2. Dock size and magnification are a matter of taste, but don’t auto-hide it, because…

  3. Remove from the Dock every app that you don’t use regularly. That way, everything there is either running or likely to be, and it becomes a useful visual status check. To the right is a snapshot of mine as I write this.

  4. Go spend some quality time in the System Preferences for Trackpad. Definitely turn on “Tap to click” and “Secondary click”. Then use the accessibility preferences to enable double-tap-and-drag.

  5. While you’re in System Preferences, make sure your keyboard repeat rate is turned up to the max; few things are more boring than holding down the spacebar or whatever and watching the cursor inch across the screen.

  6. Command-space, which brings up Spotlight search, is your friend. It’s really pretty good. Not enough people know that you can highlight things in the result list and type command-I to get a nice little popup with useful information about what you just found.

  7. Use the tab trick in your favorite browser for one-click access to things you care about. (When I wrote that piece it didn’t work in Safari, but now it does.)

  8. Keep a couple of browsers around. Chrome and Safari are both great on Mac, Firefox is OK but recently I’ve found it slow. It’s common to use one for work stuff and the other for personal. Another option is to be logged into Google in only one of them and Google-invisible in the other. Speaking of which, Safari is starting to have a strong privacy story.

  9. Preview

    Your browser will open PDFs directly and want you to read them there. Don’t. Download ’em and open ’em up in the awesome Preview app. Particularly if they’re big or complicated; Preview laughs at 500-page graphically-complex documents and provides a superior read/search/navigate experience.

  10. Despite the fact that Preview is great, do not try to use it to fill in legal forms. It will look like it’s trying to work, but it won’t. For that purpose (and that purpose only) go get Adobe Acrobat Reader.

  11. Related to Preview: Let’s assume you’re a professional who sometimes needs to show off your work. So use the command-control-shift-4 gesture to grab a piece of your screen, shift over to Preview, hit command-N and it creates a new graphic with what you just captured. The only fly in the ointment is that when you save it, it’ll want to use PNG and you almost always want JPG, so you have to toggle that on the Save menu.

    This is how I captured the Dock image above.

  12. Keynote

    If you have to give a presentation, use Keynote; it and Preview are Apple’s two truly great Mac apps. Do not go near PowerPoint, it’s a travesty on Mac.

  13. Learn to use the control-key navigation tricks. They make editing text — any text in almost any app — dramatically faster.

  14. Turn off all the notifications you possibly can. You should own your time. If you have a reasonably active life there will always be new things to read in mail and Twitter and Slack and so on; so go read them when you come to a stopping point. The only notification I leave on is the desktop Signal app, because you have to know me pretty well to reach me there. And (at work) mentions on Amazon Chime.

  15. Inbox Zero is a great idea but unattainable by most of us. Instead, try the Low-stress Inbox technique.

  16. Use a password manager. Really, please use one.

I wrote the first of these in 2002. I wonder how many more are in my future?


Plague Journal, April 2 2 Apr 2020, 9:00 pm

Hey folks, one decent therapy for times like these when the world’s trying to drive you crazy is to tell your story; doesn’t matter if anyone’s listening. This is adapted from an email to the family that got kind of out of control. Write your own #PlagueJournal entries and I’ll read ’em.

I feel vaguely like I’m setting a bad example as I cycle furiously on empty-ish streets across town each day to the boat and back; the smallest office I’ve ever had, but the view is decent.

The weather remains obstinately wintry, temperature refusing to venture out of single digits, which is OK when the sun’s out which it mostly isn’t. Nonetheless everything that can grow flowers is already showing them or expanding the buds as fast as possible. Mom was supposed to come visit us about now and there would have been lots to look at.

Downtown, people gather on their balconies and cheer wildly for three minutes for the caregivers at 7PM when the hospital shifts change; we were driving through the neighborhood last week, unsuspecting, as it exploded. Not a dry cheek in the car. Locally we’ve revived and expanded a long-neglected neighborhood mailing list, seeking people who might need some help; plenty are offering but everyone seems to be making a go of it. Anyhow, this very evening we gathered in a socially-distanced way at the end of the block to bang pots and drums and tambourines and clap hands for three minutes, lots of smiles and none forced.

Some people are much more monastic in their isolation: go shopping once every other week, stay inside. We find the grocery stores are sanitizing and social-distancing effectively, so we shop more often. Also we’d really like some of the restaurants to survive this thing so we’re getting take-out a couple or three times a week. Plus we go for lots of walks - there’s a new sidewalk courtesy where you make space for each other, stepping into the (empty) roads or on people’s lawns if need be. Very Canadian.

We pick charities and send them money but so much of the population was already living so close to the edge, these wounds will take a long time to heal.

We’re actually keeping in better touch with our friends than in healthy times, via Zoom and Skype and so on. Unfortunately what we talk about mostly is the plague news. I’m kind of tired of talking about it. One of the best military blogs is entitled War is Boring — well, so is Covid-19.

I, a data-driven numbers guy, find the daily recitation of statistics maddening, although everyone in the province loves our chief medic Dr Bonnie Henry, who has a silken voice and a Stoic demeanor. They give numbers like “number of cases” which I think means “positive tests”, a number that is entirely useless because the testing is (quite reasonably) directed at the most vulnerable and critical demographics. I am beginning to zero in on the number of cases admitted to hospital every day, or rather the rate of change in the number admitted — at least there’s clarity in what that means — and in BC, the rate of change in daily admissions is zero-ish. Which is not exactly good but not catastrophic. Catastrophic is New York today and Florida & Louisiana & Alabama & Texas looking forward four weeks. I don’t want to think about India and Africa.

Alberta is doing a little better than us, Ontario worse but not terrible, Quebec worse. But not bad like America, so far.

At work, we are running hot. Everyone’s stuck at home and using the Internet, which means us. Over on the retail side, the order spikes are frightening given that a lot of our employees are staying home for excellent reasons and the ones who are coming in have to work at half-speed due to constant disinfecting and social distancing. I see headlines in progressive publications saying how we are cruelly ignoring the plague conditions; and internal emails about all the products that are going on four-or-more-week delivery because they have to run everything extra-slow to protect the staff so they can keep shipping groceries and cleaning products. I really honestly don’t know what to think.

Our side of the company just has to make sure there are enough incremental waves of computers available each new day to keep Zoom and Netflix and ambulance-dispatch apps on the air.

The boy and the girl are both at-school-online right now, Lauren and I at-work-online. If it weren’t for the boat we’d be in trouble since the guest room is under post-asbestos-remediation reconstruction and we’re packed in pretty tight already 9-to-5 without me being in the house.

I’d really rather not be living inside a historically-significant news story. But we all are, so the only choice is to make the best of it. The virus doesn’t care how brave you are, only how smart you are.

In closing: https://xkcd.com/2287/.


Facet: Push vs Pull 21 Mar 2020, 8:00 pm

If you want to process events, you can fetch them from the infrastructure or you can have the infrastructure hand them to you. Neither idea is crazy.

[This is part of the Event Facets series.]

When you make a request to fetch data, that’s called “pull”. Which is an off-by-one error; the “u” should be an “o” as in “poll”, because that’s how most network stuff works. I’ve heard old farts of my own vintage claim that on the network, polling is really all there is if you dig in deep enough. But we do use the term “push”, in which — maybe it’s just an illusion — the infrastructure reaches out and hands the event to you.

For example

I’ll use two AWS services that I work alongside.

First, SQS, which is pure pull. You make one call to receive a batch of messages, and then another to delete them once they’re processed. If messages are being pushed into the front end of the service faster than you’re taking them out, they build up inside, which is usually OK.
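In boto3 terms, that whole cycle is a sketch like this; the queue URL and the handle function are made-up placeholders:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-west-2.amazonaws.com/123456789012/my-queue"  # hypothetical

# One call to receive a batch (long-polling up to 20 seconds)...
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
# ...and, once each message is processed, another call to delete it
for msg in resp.get("Messages", []):
    handle(msg["Body"])  # stand-in for your actual processing
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])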

Pull

Pull!
Credit: Petar Milošević / CC BY-SA

Push

Push!
Credit: Rjhartley at English Wikipedia / CC BY-SA

In SNS, by contrast, events are dropped onto a “Topic” and then you subscribe things to the topic. You can subscribe SQS queues, Lambda functions, mobile notifications, and arbitrary HTTP endpoints. When someone drops messages onto the topic, SNS “pushes” them to all the subscribed parties.
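Here’s a hedged boto3 sketch of that shape, with made-up ARNs; in real life the queue would also need a policy letting SNS deliver to it:

import boto3

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:us-west-2:123456789012:orders"        # hypothetical
topic_queue = "arn:aws:sqs:us-west-2:123456789012:order-worker"  # hypothetical

# Subscribe a queue to the topic; Lambda, HTTP, and mobile endpoints work similarly
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=topic_queue)

# Anything published to the topic is now pushed to every subscriber
sns.publish(TopicArn=topic_arn, Message='{"order_id": 42}')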

On webhooks

“Webhook” is just another word for push. I remember when I first heard the idea in the early days of the Web, I thought it was complete magic: “I’ll just send you something, and the something will include the address where you HTTP POST back to me.” Asynchronous, loosely-coupled, what’s not to like?
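Part of the magic is how small the receiving end can be. Here’s a toy sketch using Python’s standard library; note it has no auth and no TLS, which foreshadows the problems discussed below:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the pushed event and acknowledge it
        length = int(self.headers.get("Content-Length", 0))
        print("event:", self.rfile.read(length).decode())
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), Hook).serve_forever()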

There are a lot of webhooks out there. We talk to the EventBridge Partners who are injecting their customers’ events into AWS, and one thing we talk about is good ways to get events from customers back to them. The first answer is usually “we do Webhooks”, which means push.

On being pushy

On the face of it, push delivery sounds seductive. Particularly when you can back your webhook with something like a Lambda function. And sometimes it works great. But there are two big problems.

The first is scaling. If you’re putting up an endpoint and inviting whoever to push events to it, you’re implicitly making a commitment to have it available, secured, and ready to accept the traffic. It’s really easy to get a nasty surprise. Particularly if you’re building a public-facing cloud app and it gets popular. I can guarantee that we have plenty of services here in AWS that can accidentally crush your webhook like a bug with a traffic surge.

Then there’s administration. You want data pushed to you, but you want it done securely. That means whoever’s doing the pushing has to have credentials, and you have to trust them to manage them, and you have to arrange for rotation and invalidation as appropriate, and you really don’t want to be handling support calls of the form “I didn’t change anything and now my pushes are failing with HTTP 403!”

On pulling

Polling, on the face of it, is a little more work. If you care about latency, you’re going to have to have a host post a long-poll against the eventing infrastructure, which means you’re going to have to own and manage a host. (If you’re not latency-sensitive, just arrange to fire a Lambda every minute or so to pick up events, way easier.)

On the other hand, you need never have a scaling problem, because you control your own polling rate, so you can arrange to only pick up work that you have capacity to handle.
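That capacity control doesn’t need anything fancy. A sketch, with the worker-slot accounting and dispatch left as hypothetical helpers:

import time

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-west-2.amazonaws.com/123456789012/my-queue"  # hypothetical

while True:
    capacity = free_worker_slots()  # hypothetical: how much work can we take on?
    if capacity == 0:
        time.sleep(0.5)  # busy; deliberately don't pick up more
        continue
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=min(capacity, 10),  # never fetch more than we can chew
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        dispatch(msg)  # hypothetical hand-off to a worker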

On the admin side, everybody already owns the problem of managing their cloud service provider credentials so presumably that’s not extra work.

I’m not saying push is never a good idea; just that grizzled old distributed-systems geeks like me have a sort of warm comfy feeling about polling, especially for the core bread-and-butter workloads.

A small case study

Ssshhh! I’m going to tell an AWS-internals secret.

I talk a lot with the SNS team. They have a front-end fleet, a metadata fleet, and then the fleet that actually does the deliveries to the subscriptions. It turns out that the software that delivers to HTTP-endpoint subscriptions is big, and maybe the most complex part of the whole service.

When I learned that I asked one of the engineers why and she told me “Because those are other people’s endpoints. They make a configuration mistake or forget to rotate credentials or send a flood of traffic that’s like eight times what they can handle. And we have to keep running in all those cases.”


Eventing Facets 7 Mar 2020, 9:00 pm

What happened was, at re:Invent 2019 I gave a talk entitled Moving to Event-Driven Architecture, discussing a list of characteristics that distinguish eventing and messaging services. It was a lot of work pulling the material together and I’ve learned a couple of things since then; thus, welcome to the Eventing Facets blog series, of which this is the hub. It’s going to take a while to fill this out.

On-stage in Vegas

Eventing or messaging?

They’re just labels. If it’s publish-and-subscribe people usually say “event”; if there’s a designated receiver, “message”. But there’s a lot of messy ground in between; and in my experience, the software does more or less the same stuff whichever label’s on the cover. So I may simultaneously talk about eventing and use the word “message” to describe the byte packages being shuffled around.

Disclosure

I work for AWS and am a member of the “Serverless” group that includes Lambda, SQS, SNS, MQ, and EventBridge. I also support services that make large-scale use of Kinesis. You can expect this to have obvious effects on the perceptions and opinions offered herein. I’m also reasonably well-acquainted with RabbitMQ and Kafka, but without any hands-on.

Process

For each facet, I’ll introduce it here and either write a blog piece or link to an existing one that covers the territory. In the process, I’ll build up a table showing how a list of well-known eventing/messaging services support it, or don’t; I premiered a version of that table in the talk (see the picture above) and more than one person has asked for a copy.

OK, here we go.

Facet: FIFO

When you inject events into the cloud, can you rely on them coming out in the same order? For details, see Facet: FIFO.

Facet: Deduplication

When you fire an event into the cloud, can you be sure it’ll only come out again once? See Facet: Deduping.

Facet: Subscribe or Receive

When there’s an event in the cloud, how many different receivers can receive it? Just one? See Facet: Point-to-Point vs. Pub/Sub.

Facet: Serverless or Broker

Does your eventing infrastructure look like a cloud service, just an endpoint in the cloud? Or is it an actual machine, or cluster of machines, that you own and operate and connect to? For details, see Facet: Broker vs Serverless.

Facet: Push or Pull

Do you have to poll the cloud for events, or can you get them pushed to you? For details, see Facet: Push vs Pull.

Facet: Filtering vs firehose

When you reach out into the cloud and use a queue or broker or bus to receive events, the simplest model is “I subscribe and you send me everything.” Often this is perfectly appropriate. But not always; I’ve seen software subscribed to high-volume sources that starts with really dumb code to look at the messages and decide whether they’re interesting, and ends up throwing most of them on the ground.

In recent years I’ve increasingly noticed the infrastructure software offering some sort of built-in filtering capability, so you can provide a query or pattern-match expression that says which events you want to see.

We built this into CloudWatch Events (now known as EventBridge) back in 2014, and people seemed to like it. Then a couple of years ago, SNS added filters to subscriptions and, since then, I’ve noticed that a steadily-increasing proportion of all subscriptions use them.

So I’m pretty convinced that some sort of filtering capability is a basic feature that eventing and messaging software really needs. There’s no standard for how to do this: some of the filtering expressions look like SQL, some look like regular expressions, and some, like the sample below, are JSON patterns matched against the structure of the event itself.

{
  "vendor": [ "vitamix", "instapot" ],
  "product_detail": {
    "price_usd": [
      { "numeric": ["<=", 150] }
    ]
  }
}

I don’t think this subject deserves its own article, especially since back in December I wrote Content-based Filtering, which describes one particular flavor of this technology that looks like the sample above, i.e. not like either SQL or regexes. I think the piece gives a pretty good feeling for the power and utility of event filtering generally.
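As an aside, wiring a pattern like that into a rule is a single API call. A boto3 sketch, with a made-up rule name (targets get attached separately):

import json

import boto3

pattern = {
    "vendor": ["vitamix", "instapot"],
    "product_detail": {"price_usd": [{"numeric": ["<=", 150]}]},
}

events = boto3.client("events")
# Events matching the pattern will fire this rule
events.put_rule(Name="cheap-kitchen-gear", EventPattern=json.dumps(pattern))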

Still to come

Here are the ones I know I’m going to write about, and maybe there’ll be more: Payload Flavors, Records vs. Documents, Uniform vs Heterogeneous Events, Replaying and Sidelining. Like I said, this will take a while.

Summarizing

Since this table includes references to software that I’ve never actually used, it’s perfectly possible that my research has been imperfect and there are errors. Please let me know if so and I’ll dig in deeper and correct if necessary.

                      ActiveMQ  Artemis  EventBridge  Kafka  Kinesis  RabbitMQ  SNS   SQS   SQS FIFO
FIFO?                 Yes       Yes      No           Yes    Yes      Yes       No    No    Yes
Dedupe?               *         Yes      No           Yes    No       No        No    No    Yes
P2P or Pub/Sub        Both      Both     P/S          P/S    P/S      Both      P/S   P2P   P2P
Serverless or Broker  B         B        S            B      B        B         S     S     S
Push or Pull          Both      Both     Push         Pull   Pull     Both      Push  Pull  Pull
Filtering?            Yes       Yes      Yes          *      No       No        Yes   No    No

Notes

* It looks like the software can do this but either it isn’t straightforward or doesn’t work out of the box.

Kinesis is a cloud service but since you have to provision shards, it kind of acts like a broker too. On the other hand, adding and deleting shards is possible if a bit awkward.

Kinesis is pull-based, but there’s the Kinesis Client Library which gives a very push-like feeling.
