
It’s easy to love books, with their beautifully designed covers, tactile pages, and musty scent of ink. You might even track down a specific edition of a paper book, just because you like its cover better or want a copy of the first printing of a favorite volume.

eBooks, not so much. We love eBooks for specific things. We buy them on an impulse and read them moments later, adjust the font size to fit our eyesight, search for any word in the book, and get a list of highlights when done. We love the features, not the eBooks themselves. Rare indeed would be the person who tried to track down the first edition of an eBook. Now that eBooks often aren’t even cheaper than the print edition, it’s easy to wonder whether it makes more sense to just go back to print.

Which got me wondering: How did books end up this way, and what if eBooks could be better? That turned into a history of the book and its metamorphosis into an eBook, and how the latter paved the way for today’s AI revolution.

→ Continue reading on Every: In Pursuit of a Better Book

Radio was the original overhyped technology

The tech hype cycle started a century ago.

photo of a classic radio

It was a plaything for the wealthy, a tool for the powerful, hailed as a democratic gift to humanity.

It was geeky, technical, jargonistic. You’d debate the merits of the smallest changes, remix and reinvent, preorder the latest take months in advance, then watch the shipping date slip further into the future.

It spurred innovators to greatness and grandiosity. “There are no imaginable limits to our opportunities,” a government commissioner would enthuse. “I aim at Tesla,” said the self-styled father of this new technology.

It would bring the best of times; nothing else could possibly “touch the lives of all people more intimately,” as one put it. It’d also bring the worst of times; it could “suppress and distort fact,” even “stir up mob violence,” another worried.

We were promised driverless vehicles a century ago, and all we got was the radio.

Into the ether

diagrams of wiring in a radio

It started out as a spark. We’d tamed lightning, as our ancestors had tamed fire. We’d brightened the darkness, built the first automobiles, sent messages on wires coast to coast.

“Do you think there is a limit to the possibility of electricity?” Thomas Edison was asked in 1896. “No,” replied Edison, “I do not.”

Edison then fretted that the next innovations might not surprise us; “Nothing now seems to be too great for the people to comprehend.”

He needn’t have worried. “Tesla foretold of a day,” Thomas S. W. Lewis wrote of Edison’s archrival, “when the seemingly magic electrons would enable messages and sounds to be sent across great distances without wires.” That’d spark the next generation of innovations, kick off a tech cycle that’d transform the world—or so they dreamed.

It started out with wireless Twitter—or rather, with Guglielmo Marconi’s wireless telegraph, sold first to the British Navy in 1897, then used in 1899 to report on international yacht races. “The possibilities of wireless radiations are enormous,” said Marconi to a reporter. Then he started the first wireless company, sent a transatlantic message through the air two years later.

And the race was on.

Tinker, transmitter

First came the egos and the eccentrics. “Some days I don’t sleep,” Edison would claim when asked about his work/life balance. “I must be brilliant, win fame,” wrote Lee de Forest, the self-styled father of radio, in his journal. “I aim at Tesla.”

But maybe it takes the crazy ones. You can’t just invent the future; you have to sell it, too. Marconi was ready for both: he had “the vision to harness the discoveries of others,” added a few of his own, and combined that with “the skill of a P.T. Barnum” to promote the ideas.

So you’d try crazy things until something worked. Marconi “absently placed one part of his aerial on the ground while holding the other part in the air,” and voilà, antennae would live on roofs for the following century.

You’d remix. Edison discovered the “Edison effect” as carbon passed from a lightbulb’s filament to the glass bulb, John Fleming turned that into the diode, then de Forest perfected it with a battery, circuit, and zig-zagged nickel wire to amplify the radio signal with his audion—an early take on the triode that lives on today in high-end amps.

It was geeky, a hacker’s paradise of parts and schematics, a new frontier where you could broadcast your ideas. You had to learn new science, of intercepting the signal, tuning into a broadcast, and amplifying the audio to hear it. It wasn’t for everyone; it was for those who took the time to obsess over the smallest details in journals like The Phonoscope, Wireless Age, and Radio Annual.

It was eccentric, the next big thing.

The hopes and fears of all the years

boys listening to radio

And then radio was everywhere, the new thing everyone couldn’t get enough of. “Soon the human ear and imagination became insatiable,” wrote Lewis of the radio. “People wanted more of everything—music, talk, advice, drama.”

It’d rescue you from shipwreck and snowstorm, fire and flood. It’d educate; “There are no imaginable limits to our opportunities,” enthused the US Commissioner of Education J.W. Studebaker.

It promised a driverless future, even. “Steer a ship from a distance?” repeated a New York Herald reporter to Guglielmo Marconi, after his comment about the potential of the wireless telegraph. “Certainly.”

And yet, the dreams were paired with anxiety. Governments imagined radio’s potential for spying; citizens worried it could tell their darkest secrets. “It’s going to be embarrassing if the collection agencies start a broadcasting station” and broadcast the names of debtors, a letter to the New York Telegram imagined.

Radio, indeed, could threaten everything. It could “threaten our whole telephone system, I may add, our whole newspaper system,” theorized a chief Marconi engineer.

Fake news became the new worry. Radio “can inform accurately and so lead sound public opinion; or it can suppress and distort fact and so grossly mislead its hearers,” wrote National Broadcasting Co.’s Dr. James Angell in 1939. “It can stir up mob excitement, even to the point of violence,” he said without citation.

Freedom itself was in question. “Radio, in a democracy, is of tremendous importance, of far larger importance than we yet realize,” wrote H.V. Kaltenborn in the same publication. And so, National Broadcasting’s Angell teased out the question: Should radio be “controlled by the rulers,” or should it be free, or free but with the oversight that it is “never abused?”

Even equipment came up for debate. Should ships rent radios, or buy them? “The French pride possession,” a debater argued, while “In this country we get better service and better terms by renting our telephones,” debates that echo those over smartphone subsidies a century later.

We’d gone from wires to wireless in a couple decades, from debating how to build the best radio to how it should be used. Frequencies and filaments took a backseat to policy and practice.

And just as quickly, it faded into the background as just another part of life—important, a new part of the fabric of society, even, yet hardly as consequential as was once imagined.

It was just the radio, after all.

I am for peace, but when I speak…

a diagram of an Audion, a predecessor of the triode

There’s something deeply human about overestimating the change new technology will bring.

“In the future there need be no disputed readings, no doubtful interpretation of text or delivery,” wrote The Phonoscope’s inaugural edition in 1896. “Death has lost some of its sting since we are able to forever retain the voices of the dead.”

And that was for the phonograph, with voices etched in resin.

Radio elicited loftier ideas. De Forest was driven by visions of a utopia, one with “no war … easy & rapid & cheap transit.” Tesla, at a Radio Institute dinner in 1915 on the eve of World War I, hoped that “wireless would prove an agent of peace in binding the nations closer together,” even as the gathering included “many nationalities, notably those of belligerent countries,” and even as Marconi—the inventor of the wireless telegraph—had arrived in New York for the event aboard the Lusitania.

Thirteen days later, the illusion shattered, as a German U-boat’s torpedo sank the Lusitania and helped draw America into the war. Soon the United States would be recruiting radio operators, adding Radioman as a new rank in the Navy.

The tide of the Great War, nay history, was turned on the airwaves. Peace would come, but would have to be battle-tested first.

And then we’d dream again. Even before the dust settled, the 1918 textbook Radio Telephony was dedicated to radio as the “promoter of mutual acquaintance, of peace and good will among men and nations.” The dream of technology changing the world would live on.

Lasting peace proved elusive, technology notwithstanding. War or the rumor thereof, if anything, provided the spark and sponsorship to push technology forward.

Then came the computer, to crack wartime codes, calculate where bombs would burst, and tabulate first the government’s and then the international business world’s data. Then came rockets and the space race, ostensibly to put a man on the moon—or a missile on your foe. Then came the semiconductor, to fuel that space race, then be the brains behind the software that would eat the world.

And once again, we’d decry the privacy implications of the latest technology, puzzle over the geopolitical ramifications of who owns and who copies the technology, worry it was ruining everything. And we’d dream again that it’d be the end-all, cure everything, bring the world together, make our self-driving vehicles actually happen this time. It’d bring out the best and worst of us.

And perhaps, like radio, decades later it’d be just a thing, a bit of nostalgia, something that got us through our days, something that occasionally made everything better and other times made everything worse.

It’d be like everything else humans make. Soon enough, we’d move on to the next greatest thing.


Originally published on the now-defunct Racket blog on August 5, 2021. Radio photo by Markus Spiske via Unsplash.

How to give a speech people will remember.

Don't tell us what you're going to tell us. Just tell us.

Keep your eye on the clock at the back of the auditorium, they say, as an easy hack around the flight reflex of stage fright. You need something to steady your focus, something to channel your fright into the speech of your life (or of the hour at hand, at least). The clock ticks; the audience waits; you could hear a pin drop. All eyes are on you—and you haven’t the slightest idea where to begin.

So you waffle. “Hi, so glad you’re here, today we’re going to talk about…” and with that, you’ve lost the audience. The moment to start strong, passed.

“Tell them what you’re going to tell them, then tell them, then tell them what you told them,” goes the famous advice that guides schoolchildren through book reports and class presentations (advice that comes from the British pulpit rather than Aristotle, it turns out).

It might work. You might end up with something good along the way, might improvise your way into saying something quotable while trying to fill the silence.

But you might do even better just saying the single thing you want to say.

Let the words fall out.

It’s not always easy.

"This is a day I've been looking forward to for two and a half years," started Steve Jobs in his now-famous keynote unveiling the iPhone, seemingly working up the energy to move forward. Then he picked up speed, made the classic call to history: “Every once in a while a revolutionary product comes along that changes everything.” And the next hour was a blur.

The speeches that echo through history often start with that reflective look. “Fourscore and seven years ago our fathers brought forth, on this continent, a new nation, conceived in Liberty,” intoned Abraham Lincoln over the Gettysburg cemetery during America’s civil war, a phrasing Martin Luther King Jr. would echo decades later in I Have a Dream. Churchill, too, reached for history when marshaling the British will to fight. As did Kennedy when he chose to go to the moon, invoking 50,000 years of human progress as the impetus to carry on.

“Introductions should tease,” advises the official TED talk guide. Which is how Steve Jobs could get us to listen to One More Thing after he’d already told us things for an hour. History's one way in, but not the only one. Shared experiences, universal stories we all recognize, they're equally powerful hooks.

And so Bill Gates reminds us of childhood fears of nuclear war, to prime us to listen to the dangers of the next pandemic a half-decade before we'd understand what he was trying to tell us. David Blaine tells us that he tries to do things doctors say are not possible. Elon Musk, after being asked why he's boring, tells us he's building a tunnel under LA. "I've been blown away by the whole thing," said Ken Robinson to start his talk. "In fact, I'm leaving."

You can’t help but want to listen after that.

And as a speaker, you can’t help but continue on, the energy of that opening propelling you into the key point you showed up to make. You’re not here for pleasantries, for repetition and outlines and thank-yous. You’re here to tell a story—the sooner, the better.

To do that, you have to jump right in.

Cut to the chase

“Start strong,” says the TED team. “You’ll want to open people’s minds right from the very start.” Tell them something they'll relate to, something surprising, something that confirms their suspicions or challenges their assumptions, something that gives you an opening to speak on.

And then you’ll have to stop yourself.

The earliest TED talks weren’t 18 minutes long. They were hours-long presentations, filled with the usual extraneous ramblings, until TED founder Richard Saul Wurman would stand up on the stage, signaling it might be time to wrap things up.

You might not have the luxury of knowing when people’s phones light up, their heads start to nod, and you’re talking to a crowd and no one at the same time. You might be talking to your screen, posting podcasts and videos into the void, relying on view stats to guess that people liked what you said. Dig deeper, though, and you might find people only listen to half of your video, skim the middle of your blog posts, start listening to your podcast then skip on to the next when the energy dies down.

You’ve got this tiny window to hold people’s attention after your opening sentences grabbed it. You can’t just drone on forever.

So say just enough, and stop.

You’ll have to cut stuff. “I believe some of innovation is about subtraction,” said Wurman. The same goes for telling your tale.

“I have made this longer than usual because I have not had time to make it shorter,” Pascal is said to have written. Woodrow Wilson applied the same to speeches: “If it is a ten-minute speech it takes me all of two weeks to prepare it … if I can talk as long as I want to it requires no preparation at all. I am ready now.”

You might have all the time in the world to talk, want your fleeting moment of fame to linger. But your audience has a limited attention span, and it's not merely enough for them to hear you. You want them to remember. And for that, the two weeks of prep for a solid 10-minute talk are worth it.

Start strong, tell your thing, then you’re done. No need to keep going. People have other, better, things to do.

There’s a reason you’ve likely listened to more TED talks than full-length lectures: They’re short, sharable, “the length of a coffee break,” remarked TED curator Chris Anderson.

TED’s restrictions force speakers to hone their talk, making them, in Anderson’s words, “really think about what they want to say.”

It’s hard not to find time to listen, especially when you know the speaker put the work in ahead of time, made sure they’d not waste your time.

Put that effort into your speech, trim it down until it’s something people will have time to listen to. You’ll end up with a single thing you wanted to say. You’ll be ready to jump right in and say what you’re going to say, without all the repetition.

When it’s done, you won’t have to remind people what you told them. They’ll remember. They’ll spread the word, tell others they’ve got to take a few minutes and listen.


Originally published on the now-defunct Racket blog on July 7, 2021.

The Windows 96 Story

“You just learn one thing, and that’s the browser,” quipped Bill Gates while showcasing the then-upcoming Windows 98.

The empire that Microsoft had built piecemeal—software languages here, DOS and Windows there, Office and a software ecosystem tying it all together—was suddenly threatened by the web. The earliest web apps promised you could run anything, anywhere. A browser, not the latest operating system, was all you’d need.

Ignoring the web wasn’t possible. Microsoft’s infamous Embrace, Extend, Extinguish philosophy would have to work instead.

So they acquired Hotmail, one of the first web apps, and built the web so deeply into Windows 98 that the US government would accuse Microsoft of using Internet Explorer to maintain a monopoly.

Gates correctly recognized that browsers were the last app we’d learn how to use, that so much of the software to come would be browser-based SaaS.

Yet somehow, it seems unlikely he’d have imagined that decades later, a browser would be all you’d need to run Windows 98—or at least a facsimile of its most memorable features.

Rebuilding the past.

Windows 96, running in Safari

We run everything in the browser today: Slack, and Figma, and Superhuman, and Airtable, and Google Docs, and so many of the other tools that make today’s work happen.

So why not run Windows in the browser, too?

That was—in part—the idea that got ctrlz and their fellow students to painstakingly recreate the Windows of the ’90s in the browser with Windows 96. It’s a passion project that lets you relive some of your formative computing memories—and it started with a chance encounter.

“Back around 2016, I saw the Ubuntu online tour,” wrote ctrlz, before coming across the Windows 93 online desktop the following year. “I was fascinated with the concept of running a web desktop inside the browser”—even if these earlier attempts were largely non-functioning demos. So they set out to build their own. Unlike so many of the other web desktops—including Microsoft’s own Live Mesh—ctrlz’s project wouldn’t try to imagine what the future could look like, or rethink how a desktop might work if it lived in the browser. It’d recreate computing’s past, in a brand new way.

And so, hand-me-down MacBook Pro in hand, ctrlz started coding first a Windows XP-style web desktop built with static images in 2017, then a Windows 10-style UI in 2018. But newer didn’t make it better. “I wasn't happy with the way it looked,” said ctrlz, “so I eventually settled for a 9x interface in early 2019, when I decided to go ahead and make something of it.”

Soon enough, they and a team of students had recreated the operating system they’d first used on aging school computers—rebuilt using the latest web tech.

“I'm compelled to say ‘Magic’,” replied ctrlz when asked how they got so many things to work in their browser-OS, “but really, it’s a combination of WebAssembly and also intense problem solving.” It took a month to build the file system, something ctrlz is most proud of, while UTF-4096, another team member, is still working to build an AirDrop-style peer-to-peer tool to share files between Windows 96 users. “It's really about knowledge of available JavaScript APIs and finding ways to apply them to implement concepts found in contemporary operating systems,” says ctrlz.

And there’s an ecosystem of open-source software that makes Windows 96 tick: JS-DOS powering DOOM and other classic games, Visual Studio Code’s Monaco editor powering the built-in code editor, and even an upcoming Linux-based installable version of Windows 96 with a C/C++ SDK.

A BSOD error in Windows 96

And so came together what @westoncb on Hacker News called “the nicest one of these I’ve seen,” a web OS that “seems to actually work in a non-superficial way.” There’s the familiar start menu, along with Windows themes from the default ’98-style to XP’s greens and Vista’s glass. There’s a terminal, a file explorer, a text and code editor, even a more modern App Store with games and tools to install. It’ll blue screen if you click the right thing, complain about DLL errors if you try and fail to activate Windows. It’s a time capsule of computing—and a showcase of what’s possible with today’s web technology.

And then it blew up.

It all started with a simple idea: “Make a WebOS which can store files and run reasonably complex applications in an efficient way, whilst also being based on a familiar user interface that people understand.”

It’s a combination of technology and nostalgia that, in a roundabout way, managed to fulfill what Gates envisioned when launching the original Windows 98, a world where the browser was the only app you’d need.

And it manages to be less glitchy than its near-namesake, the one a USB scanner infamously sent to a Blue Screen of Death during the original demo of the operating system on which Microsoft staked its internet future.


Originally published on the now-defunct Racket blog on August 2, 2021.

You don't need video. Audio may be enough.

picture of a black microphone

Lights. No cameras. Action. Maybe David Foster Wallace wasn’t jesting when he said video calls would be more trouble than they’re worth.

Telling a business partner you’d be competing with them would never be easy, especially when both businesses still needed to work together.

So imagine a young Bill Gates telling Steve Jobs that Microsoft was building Windows and competing directly—on an operating system level, at any rate—with the Macintosh.

“We had to take a walk,” relayed Gates decades later to Jobs’ biographer Walter Isaacson. Jobs started the meeting taking Gates to task for building what he saw as a rip-off of Apple’s work—to which Gates countered that both companies had worked off Xerox’s innovations. The meeting was going nowhere. And so Jobs had to get out of the office, asked Gates to go for a walk. Maybe it was the clean air, the open sky, the grounding effect of nature. But it worked: “That was when he began saying things like, ‘Okay, okay, but don’t make it too much like what we’re doing,’” recalled Gates. Crisis averted, at least momentarily.

*** 
In the intervening years, as Jobs’ vision of a friendly computer and Gates’ of a computer on every desk morphed into today’s always-on computing, we all left the office. We took our word processing and coding, emails and chats, Photoshop and spreadsheets, into coffee shops and lunch queues. We’d think of things that needed doing and do them all, anywhere.

All, that is, except meetings. Simply by virtue of computers and phones including cameras, video calls became the standard for remote meetings. It’s harder, if anything, to start a group voice call today; video reigns supreme in the workplace.

Thus my momentary surprise when, sometime in 2017, my editor started our weekly 1-1 meeting in Slack. My phone buzzed with a Slack notification, I tapped it, and seconds later I was on what seemed like a phone call—voice, with no video. Slack, thoughtfully enough, starts voice calls by default. Video’s a button tap away if you need it, but you might not. I brought the phone to my ear and we chatted about work and goals, no video or even headphones needed.

Maybe we didn’t need video calls for everything. Maybe audio is enough. And maybe it could get us over our collective Zoom fog.

This is what you want, this is what you get.

AT&T's original video phone, the MOD I, from 1964.

“Through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles,” predicted the original Tesla in 1926, imagining a future smartphone small enough that “a man will be able to carry one in his vest pocket.”

The future came, sure enough. We got Twitter instead of flying cars, but video calls came almost exactly as they’d been imagined. If phone calls had revolutionized communications, video calls would surely do even more.

We were so excited about video calls being possible, we collectively rushed into a future that neglected audio calls, with video being the default way to talk in Skype, Messenger, FaceTime, and more. “In some ways we kind of skipped over audio on the internet,” Racket founder Austin Petersmith remarked to Protocol’s David Pierce. “The amount of experimentation of video is orders of magnitude more than audio, and I think that's because there's a tendency to go toward the highest-fidelity thing available.”

When every phone comes with a camera, surely we should use it.

Or perhaps not. Around the same time IBM added the first webcams to laptops in 1996, author David Foster Wallace in Infinite Jest imagined a world where we’d try video calls, then decide they were more trouble than they were worth.

Traditional phone calls, to Wallace, were brilliant for the information they withheld, the fiction they created. “Good old traditional audio-only phone conversations allowed you to presume that the person on the other end was paying complete attention to you,” wrote Wallace. We could do anything, look any way we wanted, and the person on the other end of the line would be none the wiser.

Video calls, he predicted, “rendered the fantasy insupportable.” We’d have to perform for the call, act like we were looking at the speaker, look presentable at the last moment. “There was … no answer-as-you-are informality about visual-video telephone calls.” And perhaps that’s what we liked about phone calls all along, without realizing it.

And so, in Infinite Jest’s imagined world, humanity would switch to video calls for a year, then quickly shift 90% of calls back to audio-only.

After a year where everything important has been held over video calls, it feels like Wallace may have not been too far off the mark.

Mirror, mirror on the screen

Searches for "Zoom Fatigue" peaked the end of March 2020, as much of the world went into lockdown and seemingly every interaction was suddenly over video calls.

It turns out, it’s tiring, being on camera all day.

You have to look at the camera, fake eye-contact with the speaker. But “in real life, how often do you stand within three feet of a colleague and stare at their face? Probably never,” surmised Liz Fosslien and Mollie West Duffy in the Harvard Business Review. “Having to engage in a ‘constant gaze’ makes us uncomfortable — and tired.”

You have to perform, keep up appearances, seem engaged and attentive. “When you're on a video conference, you know everybody's looking at you; you are on stage, so there comes the social pressure and feeling like you need to perform. Being performative is nerve-wracking and more stressful,” the BBC quoted INSEAD professor Gianpiero Petriglieri. “You cannot relax into the conversation naturally.”

And for all your effort, tech too often gets in the way. Your mic may not work, your video call app might crash, a siren might go by right as you need to talk. Even a slow internet connection can make you come across worse to your colleagues, as a 2014 study by Berlin Institute of Technology researchers found that when the audio is delayed more than 1.2 seconds, participants “are perceived less attentive, extraverted and conscientious.”

You’re performing. You’re trying to decipher unclear facial cues. Add context collapse—where every event happens in the same space, without the familiar separation between work and leisure spaces—and your brain never has time to turn off. All that together gives us “Zoom fog,” the mental blurriness after one too many video calls, and the dread of needing to do more.

Call me. Video, maybe.

And so, just as remote meetings are more common than ever, Infinite Jest's prediction that we might “actually prefer the retrograde old low-tech Bell-era voice-only phone interface after all” is coming true in the remote startup world, at least.

“Video makes many conversations worse,” tweeted Basecamp CEO Jason Fried, after decades of experience leading one of the first remote companies.

He’s not alone. The Gumroad team is switching meetings to audio talks after founder Sahil Lavingia said they found “Video is largely unnecessary.” “We almost never use our cameras for internal meetings,” shared @MikeRaia in a Capiche discussion about video call defaults, where @briana9 mentioned “My team is mostly video off.”

Hiten Shah—founder of CrazyEgg, KISSmetrics, and FYI—shared that “having remote meetings while walking” and “sitting in nature during a meeting” were some of the things he does to make remote work more healthy, tips one suspects Jobs might have enjoyed. You could walk during video meetings, at the risk of making your team seasick—switch to audio calls, though, and you can move as much as you please.

You could even take it to the logical extreme and drop meetings altogether. Todoist founder Amir Salihefendić mentioned that his favorite company perk is “Async-first. This enables you to live wherever and work when it suits you.” You could share your thoughts in Slack or record them in tools like Yac, and rethink if meetings are even necessary. Or, go in the other direction, and use something like Discord with always-on audio chat in the background to talk whenever you want, spontaneously.

Whether your team needs spontaneous, location-independent meetings or simply wants to work more async, audio’s part of making it happen. You can turn off the camera, relax a bit more on calls, and think more about what’s being said and less about how you look while you’re saying it. You could take a walk, or take the call from wherever you happen to be at the moment, when video's no longer a requirement.

It's part of the enduring appeal of radio shows and podcasts—you can listen and learn while doing other things, without being glued to a screen for yet another hour. Where video ties us down, audio lets us do more, frees us to work the way that fits best without performing for the camera. We can talk on FaceTime Audio and Slack calls, all with just a microphone.

Sometimes more isn’t better. Sometimes less data can be more.

*** 

Originally published on the now-defunct Racket blog on February 23, 2021. Microphone photo by Jukka Aalho via Unsplash. Mod I video phone photo via Bonhams.

To say a single thing

A tree in a white, snow-covered field

Fall down the rabbit hole. Focus intently on one thing. That's how you get remembered.

It’s not just anyone who would want to beta-test the first production electric sports car, much less pay $100,000 and wait years for the privilege of doing so.

Yet that was the point. Tesla’s first car wasn’t designed for everyone. It was designed to make a splash. You might not buy it, but you wouldn’t forget it.

As CEO Elon Musk wrote in 2006, Tesla’s “secret” plan was to “Build sports car. Use that money to build an affordable car. Use that money to build an even more affordable car.”

The challenge wasn’t so much in building an electric car; that’d been done well over a hundred years earlier, back when the original Tesla was still alive.

“If you’re trying to build something that’s truly new, you can’t start off by trying to reach a mass market,” said Carver Mead in a New Yorker piece about innovation. Musk would agree.

The challenge was focus.

You just have to say a single thing.

It’s tempting to go big from the start, to make or write or say something that’d appeal to everyone. Do that, though, and you make something everyone’s ok with but no one really wants.

So the UNIX philosophy advised early programmers to “Write programs that do one thing and do it well.” Writer David Perell advises that “writing comes alive at the extremes,” in explaining why to write for an audience of one. “There isn’t one chair for everyone,” says app designer Andy Allen, in explaining why we need no boring software.

Whiplash wasn’t made for everyone; few would directly relate to the experience of playing jazz drums. Yet you couldn’t have made a better movie with more well-rounded characters, more topics and ideas to broaden the audience. The focus kept us watching.

And so it goes for cars, for software, for drinks, for books and movies, for podcasts and lectures. There’s no one best product. There are, instead, infinite niches, and you’ll make something far more interesting by focusing on one specific thing.

Every brand’s worst danger is that people won’t notice, won’t care, won’t stop and pay attention. Any press is good press.

So you don’t build a vehicle for everyone. You build one for a specific type of customer, make something they’ll love. Something they’ll talk about. Something that builds a devoted fan base that will broaden over time.

“It’s not risky to sell a service that isn’t for everyone,” wrote Tom Hirst in a Twitter thread about freelancing. “It’s smart.”

TikTok used that to build a new social network where few thought they could. Every other video platform was built around long-form, landscape-oriented videos. TikTok limited you to 15-second-long portrait videos, something that appealed almost entirely to people who weren’t already videographers. The constraints unlocked new creativity.

Toyota might find it risky to sell vehicles for extreme niches. Tesla in those early days would have found it far riskier to focus on anything other than the niches. They needed something that got people talking; even complaints were better than being ignored.

You too.

TED talks are so popular for two reasons: They’re short enough to watch on a coffee break, and focused on one thing so you know what you’ll learn. You might not take time to listen to any random person rambling for an hour; you’re far more likely to listen to someone tell you something specific in 18 minutes. Perhaps they’ve got a lot of other interesting things to say. But you would have never started listening unless they took the time to edit their thoughts down to that one talk.

Cut all the things.

And that, perhaps, is what calls for the hardest work.

“If it is a ten-minute speech it takes me all of two weeks to prepare it; if I can talk as long as I want to it requires no preparation at all. I am ready now,” claimed US President Woodrow Wilson, echoing Blaise Pascal’s “I have made this longer than usual because I have not had time to make it shorter.”

You could say everything on your mind, talk for an hour, write a book out of one idea. Or, you could take Friedrich Nietzsche’s approach, who aspired to “say in ten sentences what others would say in a whole book.”

Editing takes more of your time, but less of others’. It respects your audience’s time.

So you cut. You’ve got a single thing to say—and in the process of unpacking that idea, go ahead and let it all fall out. Fall down the rabbit hole. Then work it together into a cohesive narrative that tells one story, into a product that’s built for one specific thing. Your finished story might tell more than one thing, but each detail should serve the greater point, one idea building on another until your audience walks away with that core concept in mind.

That one single thing.

“Build a better mousetrap,” Ralph Waldo Emerson is often apocryphally quoted, “and the world will beat a path to your door.”

Clayton Christensen of Innovator’s Dilemma quipped the opposite: “Build a worse mousetrap and the world will beat a path to your door”—a nod to the prevalence of lower-cost, technically lower quality upstarts that often disrupt their pricier, “better” competition.

But perhaps the important thing is the mousetrap itself. Build a better widget that does a dozen things, talk all you want about a hundred ways to improve the world with your better mousetrap as point 87, and you’ll be lucky if crickets still hang around your door. It’s the focus on one thing—and making that one thing better in some way, finding something that catches people’s interest about that thing—that gets them to beat a path to your door.

You just have to say a single thing.


Originally published on the now-defunct Racket blog on June 30, 2021. Tree photo by Fabrice Villard via Unsplash.

How TED Talks became 18 minutes long

Bill Gates giving a TED talk

Talk briefly, and carry big slides.


The applause fades, the hush falls, and a moment later you’re engrossed in a topic you might only just be learning about. TED talks are iconic enough you could recognize one by sound alone, ubiquitous enough you’ve likely watched dozens already.

Yet it all started as “the dinner party I always wanted to have but couldn’t,” with the 300 or so folks Richard Saul Wurman gathered in Monterey, California, for an inaugural conference around Tech, Entertainment, and Design.

Thus began TED.

“I'm bored out of my head at conferences,” said the founder behind perhaps the most famous conference. “I hate being spoken to. I hate education.”

So perhaps it’s not surprising his brainchild would become known for pithy speeches, that we’d learn how schools kill creativity and that we’re not ready for the next outbreak and more in 18-minute slots at this re-invented conference.

But it took time to get there.

For the next 90 minutes…

You know the feeling: The clock’s ticking, you’ve got to finish your project or submit your homework or file your taxes by midnight. And somehow, you pull it together in the nick of time and hit Submit without a minute to spare.

Now imagine being asked to speak at a conference, and the host is standing on stage hinting you should wrap things up.

That’s how TED talks and their famous 18-minute limit began.

“Presentations ran as long as 90 minutes,” reported Wired magazine about the first TED conference, scarcely different than the standard conference Wurman disliked.

“One path to true innovation is through subtraction,” Wurman would later remark when thinking back about TED. So, wondering “What can I create that isn’t boring,” he shook up the presentations.

Wurman would sit on the platform during the talks. He’d stand up, perhaps when boredom kicked in, start walking toward the presenter, standing “closer and closer to them as their time runs out.” Before long, he’d have six people presenting within the 90 minutes an original speech took.

And so, a general 15-minute limit on talks was standard by the time he sold the TED conference to Chris Anderson in 2001, a limit prompted by little more than when Wurman felt a talk had covered enough and he’d started to lose interest. But it wasn’t set in stone. “I kind of learned in my first year that 15 was often interpreted as 20 or 25,” remarked Anderson to Charlie Rose years later. “And so the 18 initially came in just as an attempt to be more precise.”

And that’s how TED talks managed to recreate what Anderson called “the ancient campfire experience” with information-dense, focused speeches.

There was no technical reason why 18 minutes was chosen, why 15- or 20-minute speeches couldn’t have been better. Perhaps some other length was optimal. But the original guideline stuck, and what started as Wurman crowding speakers off stage turned into today’s most popular talk limit.

That’s what they said.

And TED’s not alone. If anything, it brought conference speeches closer to the length of more public performances.

Lincoln’s Gettysburg Address was a mere 3 minutes long. Kennedy set a nation’s eye on the moon in 17 minutes. Steve Jobs’ Stanford commencement speech was 3 minutes shorter than a TED talk.

Media found similar limits work well. The BBC thinks 3 minutes is enough to tell eight news stories; The News Manual says 15 minutes is enough time to tell 20 stories “and still treat each story properly.” Perhaps that’s too many stories, and you’d prefer more depth—late-night comedy agrees while still sticking to 12- to 20-minute limits for its monologues.

18 minutes, as it turned out for TED talks, was “long enough to be serious and short enough to hold people’s attention,” remarked Chris Anderson after Wurman passed the TED torch. “One of the most common (presentation) killers is a lack of clarity,” a particular problem when you’re trying to build something not boring. The time limit saved the day. You can’t edit a live speech; the time limit forces you to self-edit as the next best thing.

“It has a clarifying effect,” said Anderson. “It brings discipline.”

And it turned into a set of guidelines the TED team would tell potential speakers, to keep talks focused on a single major idea—an idea that’s worth sharing.

The limit wasn’t merely helpful for speakers. It also kept listeners’ attention. For most topics, “People will begin to tune out after approximately 10 minutes," found biologist John Medina in research for his book Brain Rules. Even listeners who continue paying attention can only remember so much, anyhow: A 1995 study by the US Navy found “A 20-minute lecture was equal to the classic 50-minute lecture in terms of information retained.”

There’s no perfect speech limit. TED’s 18-minute limit isn’t magical. But it does seem a close approximation of the optimum: speeches somewhere between 10 and 20 minutes long work best.

Wurman’s original intuition was correct. Hour-long talks were indeed one of traditional conferences’ downfalls, and cutting them down to under 20 minutes did the trick. It kept speakers focused on one topic, kept listeners listening, and turned talks into something sharable, something you could watch on a break, something perfect to go viral.

And that’s how TED talks became 18 minutes long.


Originally published on the now-defunct Racket blog on June 15, 2021.

In early 2014, Danny Schreiber—then the sole person on Zapier's Editorial team—reached out and asked me to freelance for the Zapier blog. That was my luckiest career break—it started what turned into around five years helping the burgeoning automation platform grow and standardize its content marketing. The work began with Zapier's app directory and software roundup articles, then branched out into tutorials, software reviews, eBooks, documentation, and more over a half-decade of writing.

And we did it all by being what I liked to call the "unmarketing marketing team." We weren't trying to market Zapier directly, shouting from the rooftops that you should automate this and that. Instead, we focused on our complements, the tools that people used alongside Zapier. We made our partners' software seem great, and that in turn led people to use their product and Zapier. So when people searched for those tools, odds are Zapier was the first thing that'd show up, and the first thing they'd think of when they wanted to take that product to the next level.

Here's the story of my editorial work at Zapier—how our team helped everyone find the business software they needed, then taught them how to get the most out of it.

Continue reading on Ahrefs: How Zapier Built a Content Marketing Machine

Did you know a spreadsheet was originally a sheet of raw rubber?

Or that the US government in World War II made the first documented use of the word spreadsheet to describe a table of numbers?

Spreadsheets, for decades after the PC revolution, were synonymous with Excel, and for good reason. It was the first spreadsheet on the original Macintosh, the first time spreadsheets left DOS and entered the world of windows. And feature by feature, integration by integration, it became an ingrained part of business life, enough that it was impossible to imagine a world without Excel.

That is, until a decade ago, when spreadsheet startups started popping up everywhere.

How do you lose a monopoly? It might start with your product becoming so popular people take it for granted—and then it becomes just another computing abstraction, an interface people expect to work everywhere, one they wouldn't be surprised to find showing up somewhere new. And then the competition can start showing up.

Here's my take on how Excel started getting competition.

Continue reading on Every: The End of Excel

How to get Google to DKIM authenticate your domain

“Email authentication was not verified.”

DKIM is one of the easiest things to overlook when setting up your domain with Google Workspace. It’s not required—you can add Gmail to your domain with a few MX records and nothing else.

But you don’t want your emails to end up in spam folders. And a DKIM record won’t guarantee anything, but it will make it far more likely your emails will show up in your recipients’ inboxes. It’s worth the few extra minutes to set up.

And yet. Odds are, if you follow Google’s directions to add a DKIM record to your domain, you’ll get the “Email authentication was not verified” error that stumped me for so long.

Here’s how to fix it.

First, go to Google Workspace’s Gmail > Settings > Authenticate Email to get your DKIM key. Here’s where you’ll likely get tripped up.

If you double-click on Google’s TXT record value to copy it, your browser will show that only the DKIM record’s text is selected. If you look closely when you paste the text, though, you’ll notice that the copied text also includes the GENERATE NEW RECORD text from the button below your DKIM code.

So, before pasting your DKIM key into your domain DNS settings, it's worth pasting it into a text editor and making sure the end of the text string is your exact DKIM record, without any extraneous text tagging along at the end.
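If you’d rather script that sanity check than eyeball it in a text editor, here’s a minimal Python sketch of the same idea. The debris string and the sample key below are illustrative assumptions based on what I saw, not Google’s exact output:

```python
# Sketch: strip stray admin-console button text from a pasted DKIM value.
# The "GENERATE NEW RECORD" debris string is an assumption based on the
# copy-paste trap described above; adjust it if your console differs.

def clean_dkim_value(pasted: str) -> str:
    """Return the pasted DKIM TXT value with trailing UI debris removed."""
    cleaned = pasted.strip()
    debris = "GENERATE NEW RECORD"
    if cleaned.endswith(debris):
        cleaned = cleaned[: -len(debris)].rstrip()
    # A valid DKIM TXT record starts with its version tag.
    if not cleaned.startswith("v=DKIM1;"):
        print("Warning: value doesn't start with 'v=DKIM1;'; double-check your copy.")
    return cleaned

# Hypothetical pasted value, with the button text tagging along at the end:
pasted = "v=DKIM1; k=rsa; p=MIIBIjANBg...AQAB GENERATE NEW RECORD"
print(clean_dkim_value(pasted))
```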

Then, open your domain registrar’s settings, add the DKIM record as a new TXT record, save your settings, and a day or so later you should be able to verify your domain.

If you still have trouble, check your DKIM Core Key Record on dkimcore, and if it doesn’t verify, try removing the last few characters from the end until you get exactly what Google’s settings page shows—and that should verify. That's how I figured out how to get my DKIM key to work in the first place.
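And if you want to see what your DNS is actually serving once the record propagates, a quick TXT lookup does it. Here’s a rough sketch using the third-party dnspython package (pip install dnspython); the selector google is Google Workspace’s default, and example.com stands in for your own domain:

```python
# Sketch: fetch the published DKIM TXT record and reassemble it.
import dns.resolver  # third-party: dnspython

def fetch_dkim_record(selector: str, domain: str) -> str:
    """Look up selector._domainkey.domain and rejoin its TXT chunks."""
    name = f"{selector}._domainkey.{domain}"
    answers = dns.resolver.resolve(name, "TXT")
    # DNS splits long TXT records into 255-byte chunks; rejoin them so the
    # result matches what Google's settings page shows.
    return "".join(
        part.decode() for rdata in answers for part in rdata.strings
    )

record = fetch_dkim_record("google", "example.com")  # swap in your domain
print(record)
print("OK" if record.startswith("v=DKIM1") else "Doesn't look like a DKIM record")
```

If the tail of that output doesn’t match what Google’s settings page shows, the stray copied text above is the likely culprit.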

Then, back to emailing, now with a bit more hope that your mail will land in the inbox instead of the spam folder.