Well. Not all of it. Mostly the social sciences and medicine. And I don’t just mean the fact that they consider Freud canon.
It started with a trickle. A retracted paper here. A study that couldn’t be repeated, there.
Then someone decided to get systematic. It opened the floodgates. A study in 2016 showed that 70% of scientists had failed to replicate another scientist’s work, and fully half had failed to reproduce their own work.
Reproducibility is fundamental to the scientific method - it’s supposed to be a study of the natural world, which doesn’t change all that often - so what does its absence mean? Are we incompetent? Can we trust anything? Do we know anything?
The high failure rate of venture-backed startups is its own kind of replication crisis: “How could my company fail? I followed the growth-hacking, blitz-scaling advice from the founders who made it big!” I don’t mean to give blogs and podcasts the weight of peer-reviewed science. But our industry seems to trust them as if they deserve it.
What does it mean if a founder can’t get similar results when following the practices of another?
Science has begun to heal itself. It’s time for startups to go through their own reckoning. Their methods are failing most people. It’s time to learn why and how to get better.
What’s wrong with science?
The crisis in science has multiple, interconnected causes. A lot of them come down to taking techniques from simpler systems and applying them to the far more complex study of humans. The practices useful for studying minerals also worked great on metals, but with people? Not so much.
One of the most famous examples of these studies that fizzle under scrutiny is the marshmallow experiment, conducted at Stanford University in 1972 on the children of students enrolled there. It produced original, important conclusions on the ability of children to endure delayed gratification, and later studies showed that ability was highly correlated with success later in life. Suddenly we’ve got a new tool for understanding how successful you’ll be at a very young age.
Or… maybe not. Further studies showed the original work was actually just exposing the socioeconomic background of the kids. If your family is well off, you are comfortable with delayed gratification and, just coincidentally, are also likely to be well off when you’re older. If you’re from a poor family, delayed gratification is harder to accept and, huh, you’re also more likely to be poor than those kids of rich parents.
Once someone reran the study with a larger group of kids (900 instead of 90) and controlled for socioeconomic background… the effect largely disappeared. It’s not all that surprising that kids with no food insecurity are better at delaying gratification and also will be more successful in life. It certainly doesn’t grab the headlines like announcing that kids who can wait five minutes to eat a marshmallow will earn more money than those who can’t. No HBR article for that one.
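A tiny simulation makes the confounding story concrete. The numbers here are invented for illustration, not taken from the actual studies: a single hidden socioeconomic factor drives both a child’s wait time and their adult outcomes, so the two correlate strongly until you control for it.

```python
import random

# Invented toy model: a latent socioeconomic factor drives BOTH how long
# a child will wait for a marshmallow and how well they do later in life.
N = 5000
ses = [random.gauss(0, 1) for _ in range(N)]
wait = [s + random.gauss(0, 1) for s in ses]     # wait time
success = [s + random.gauss(0, 1) for s in ses]  # adult outcome

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

def residualize(ys, xs):
    """Regress xs out of ys; what's left is ys with the confounder removed."""
    n = len(ys)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return [y - my - b * (x - mx) for x, y in zip(xs, ys)]

raw = corr(wait, success)  # looks like a real effect (around 0.5)
controlled = corr(residualize(wait, ses), residualize(success, ses))
print(f"raw r = {raw:.2f}, controlled for background r = {controlled:.2f}")
```

The headline correlation is real, but it vanishes once the confounder is regressed out, which is roughly the shape of what the larger, controlled replication found.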
It’s been almost fifty years since this study was published. That’s five decades of science based on flawed work, five decades of science that has to be unwound and retried. The longer these mistakes last, the more expensive they are to fix. And like that HBR article above, many conclusions never get retracted.
One particular “technique” has helped trigger the crisis in science. Many a growth-hacking product manager has fallen into the same trap. They can only be rescued through discipline and rigor.
The how and why of P-hacking
Abusing data is a sure way to get bad results. Unlike startup founders, scientists rarely just make up their data. They make more subtle mistakes, like p-hacking. The name probably sounds pretty cool, but it’s actually a common form of data misuse. Wikipedia describes it this way:
…performing many statistical tests on the data and only reporting those that come back with significant results.
It works like this:
A researcher comes up with an idea for a study. He collects a bunch of data, runs the experiment and… no dice. The idea didn’t pan out.
Hmm. “I have all this data. I can’t just throw it away.”
So he starts slicing the data looking for something that stands out. After a while, sure enough, he finds some correlation that is strong enough to stand up - usually its p-value is under 0.05, and thus it’s considered statistically significant. He publishes this in a paper and looks like a genius. It gets big exposure in the press. Journalists love weird and surprising science. They can report on it without understanding it.
But no one can reproduce the work. The paper gets retracted. He gets uninvited from the big conferences. (Don’t worry. The newspapers never follow up and report the retraction.)
What went wrong?
He left out one key piece: How he got the data.
Let’s say he thinks breastfed kids are healthier than bottle-fed kids. He sets up a study that tries to isolate just these variables, which means he wants his population to be reasonably homogeneous (similar quality of life, similar locations, etc.). Put simply, the difference being researched should be the only material one in the population (unlike in the marshmallow experiment).
He could just toss the data. But, well, he’s already paid to collect it. He’s got all these graduate students who are working nearly for free. He might as well try something. So he puts a student or two on trying to find useful results.
They nearly always do, but… that success kills his work. All the controls that made the data fit his original experiment fatally bias it for any other study.
Let’s say he discovers that the study participants who were bottle-fed tended to move around a lot more than people who were breastfed. He concludes, oh, wow, getting bottle-fed causes you to hate your parents and move away. (Yes, this is exactly the kind of headline that would get picked for a result like this.)
He has not proven that. All he has shown is that in this particular - probably small, and certainly narrow - data set, that happens to be the case.
He should throw away all the existing data and start from scratch, controlling for everything except this new variable under test. Only then can he look for correlations between how a baby was fed and mobility.
But he was too lazy or scared to do that. He found a match in that smaller, biased data set, and then published the results without admitting the problems in either his data or his methods. A few decades ago he would have gotten away with it: A big splashy result on publication, and then everyone just assuming this was true, with no attempt to reproduce and no real questioning of the result.
Today, no chance. Science has developed defenses against this kind of malpractice.
Researchers register, in a central database, their intent to study the health of breastfed vs. bottle-fed babies. When they get results, they point to that registration and say: see, this is what led to my data collection.
If they then wanted to publish some other study, people would say, no, you didn’t pre-register this, which makes us suspect you’re p-hacking, so we’re going to do a deep dive on how you got your data. On second thought, we’re just going to reject your paper. Come back when the results hold on a clean dataset.
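The arithmetic behind the trap is worth seeing once. In this sketch (the sample size, number of hypotheses, and study count are all made up for illustration), every dataset is pure noise, yet slicing each one twenty different ways still turns up a “statistically significant” correlation most of the time:

```python
import random, math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# For 30 samples, |r| > 0.361 is what "p < 0.05" means for a correlation.
CRITICAL_R = 0.361
N_SAMPLES, N_HYPOTHESES, N_STUDIES = 30, 20, 1000

lucky_studies = 0
for _ in range(N_STUDIES):
    outcome = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
    # Slice the (pure noise) data twenty ways; keep anything "significant".
    found = any(
        abs(pearson_r([random.gauss(0, 1) for _ in range(N_SAMPLES)], outcome))
        > CRITICAL_R
        for _ in range(N_HYPOTHESES)
    )
    lucky_studies += found

rate = lucky_studies / N_STUDIES
print(f"{rate:.0%} of pure-noise studies found something 'significant'")
# Expect roughly 64%: 1 - 0.95**20.
```

Run one test at the 5% level and you’re wrong one time in twenty; run twenty tests on the same data and being wrong at least once is the most likely outcome.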
From social science to startups
This might not initially seem to have anything to do with startups. Product managers and marketers aren’t commissioning studies - and they certainly aren’t controlling for variables!
Hmm. If you look at it a bit funny… Every data-backed marketing campaign and feature launch is an experiment.
Let’s build an analogous example.
A product manager builds a new feature, and because he’s growth hacking, he has lots of telemetry to tell him exactly how people are using it.
His theory is that people will use this new feature in some specific way. But he builds it, ships it, and observes, well, hmm, no, almost no one is using it. It’s a bust. I’m sure you’ve never worked on a project like this, but trust me, it happens.
Except… hey, there’s this small group that is using it, and heavily. He looks into it more closely and realizes they’re using it at 10x the rate people use the rest of the product. So he changes plans and rebuilds the feature around the specific thing those few people were doing with it.
Wait, what? No one uses that feature either, and even worse, the people who originally used it aren’t using it any more, now that it’s focused on their actual usage!
What went wrong?
You got caught p-hacking
The data set from his failed feature is bad data. He got the most important result: This feature did not work well for his users. He wasn’t willing to let go of failed work. Just like the scientists, he went looking for some other way to reuse it. And instead of developing new hypotheses and running new experiments, he took his biased data and tried to find new correlations cheaply.
Unfortunately for him, he did.
But when he shipped the rebuilt feature, he was faced with a harsh truth: Those few people who were using the original feature in unexpected ways don’t look like the rest of his users. A feature built for their purposes doesn’t help everyone else. And because he relied on data to make his decisions instead of talking to actual users, he learned too late that those unrepresentative users were doing something even weirder than the telemetry showed. His rebuilt feature stripped out the very weirdness they relied on, in the name of a simplicity everyone could use.
So now he’s two features in, with nothing to show for it. So much for growth-hacking.
How do I fix it?
The solution is very similar to what science has done.
Connect your data to experiments. With discipline. You must get new, clean data for each new test. I know this is anathema to modern data-oriented product management. But it’s the only real way to trust your results.
That word discipline is key. You don’t need to build some international central registry. Whatever your mission statement says, you’re not really saving the world, and you’re not actually doing science. You’re just trying to build a product people love. What you need is rigorous internal practices, and to hold each other accountable so you can’t cheat at statistics.
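What that internal practice can look like is easy to sketch. All the names and thresholds below are hypothetical, not a real framework: the idea is simply to write down the hypothesis, metric, and success bar before collecting any data, and to refuse to evaluate the experiment against anything collected earlier.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Experiment:
    """A pre-registered experiment: hypothesis, metric, and threshold are
    written down before any data is collected."""
    hypothesis: str
    metric: str
    success_threshold: float
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def evaluate(self, observations):
        """observations: list of (collected_at, value) pairs.
        Data gathered before registration is rejected outright --
        evaluating against it would be p-hacking."""
        fresh = [v for t, v in observations if t > self.registered_at]
        if len(fresh) < len(observations):
            raise ValueError("dataset includes pre-registration data")
        if not fresh:
            raise ValueError("no data collected after registration")
        mean = sum(fresh) / len(fresh)
        return mean >= self.success_threshold, mean
```

The useful part isn’t the code; it’s that the registration timestamp turns reusing old, biased data into a visible rule violation instead of a quiet shortcut.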
Unfortunately, this requires you let go of one of Silicon Valley’s most cherished and wrong beliefs.
Experiments fail. This might be an important part of the process, but it’s not very valuable. Congratulations. Of all the possible ways you could fail, you’ve discovered one of them. Don’t let it go to your head.
Don’t work too hard to salvage that failure. You’re p-hacking, and just making it worse. Yes, obviously, you get personal lessons. You might be lucky enough to learn something that triggers your next experiment. But you have to go run that separately.
You can’t build on the detritus of failure.
So my data is now worthless?!
Of course not. I still rely on data for all kinds of problems. One of the great things about building a company today is how easily you can get information at scale.
But never let yourself forget that your data is heavily biased, especially by how it was collected. One of my favorite examples is from when YouTube dramatically reduced response time. Their average response times went up! Suddenly people with much worse connectivity found it worth using, making the average worse. The developers thought they were helping existing users, but the biggest impact was in creating new ones.
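Here’s that mixture shift in miniature, with invented numbers: every individual user gets faster, yet the newly viable users raise the average.

```python
# Invented numbers illustrating the YouTube anecdote above.

# Before the speedup: 1,000 users on decent connections, ~500ms each.
before = [500] * 1000
avg_before = sum(before) / len(before)

# After: the same 1,000 users now see ~300ms each...
existing = [300] * 1000
# ...and 500 users on poor connections can finally use the site, at ~1200ms.
newcomers = [1200] * 500

after = existing + newcomers
avg_after = sum(after) / len(after)

print(f"average before: {avg_before:.0f}ms")  # 500ms
print(f"average after:  {avg_after:.0f}ms")   # 600ms: "worse" on paper,
                                              # yet everyone is better off
```

If you only watch the aggregate metric, a genuine improvement can read as a regression; segmenting by cohort (existing vs. new users) shows what actually happened.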
You have to recognize your job isn’t to find some way to make the data valuable. Your job is to make high-quality decisions. Use data when you can. If you don’t have data, go get it.
But the job of the data is to inform you, not give you answers. Use it to hone your instinct, to improve your decision-making. When something doesn’t add up, go talk to the actual humans who are the source of the data. Even spend some time with people who aren’t represented in it at all.
If you’re working at a software startup, you’re not doing science (even if, like me, you have a science degree). But you should still take advantage of its discipline and practices.
Don’t stop at protecting yourself from p-hacking. One founder’s success might be hard to replicate for many reasons. Gain what lessons you can, but don’t blindly trust others’ stories of their own work.
Because failure on your part won’t be paired with the retraction of a Nature paper, it’ll be an announcement of layoffs in TechCrunch.
Automation is not to blame for all the job destruction and wage stagnation. But you can still do great harm if you build it for the wrong reasons.
We’re told that automation is destroying jobs, that technology is replacing people, making them dumber, less capable. These are lies, with just enough truth to confuse us. You can have my robot washing machines when you pry them from my cold, wet hands.
I’m not some Pollyanna, thinking tech is only ever positive. Its potential for abuse and hurt is visible across the centuries, and especially so today. But I’m more optimistic about the upside than I am pessimistic about the down, and I’m uninterested in scaremongering screeds against it.
And yet. Technology and automation are not forces of nature. They’re made by people. By you. And the choices you make help to determine just how much good or bad they do. Even with the best of intentions, you might be doing great harm. And if you don’t have good intentions at all, or you don’t think ethics are part of your job, then you are probably downright dangerous.
I’m here to convince you that you have a role in deciding the future impact of the technology you build, and to provide you - especially you founders, tool builders, automators - some tactical advice on how to have the best impact, and avoid the dark timeline.
As I was building Puppet, when I explained that I was developing automation for operations teams, execs and salespeople would think they got it: “Oh, right, so you can fire SysAdmins!”
Ah. No.
When prospective customers asked for this, I offered them a choice: You can keep the same service quality and cut costs, or you can keep the same cost, and increase service quality. For sysadmins, that meant shipping better software, more often.
Their response? “Wait, that’s an option?!” They only knew how to think about their jobs in terms of cost. I had to teach them to think about quality. This is what the whole DevOps movement is about, and the years of DevOps reports Puppet has published: Helping people understand what quality means, so they can stop focusing on cost.
And those few people who said they still wanted to reduce cost, not increase quality? I didn’t sell to them.
Not because they were wrong. There were real pressures on them to reduce costs, but I was only interested in helping people who wanted to make things better, not cheaper. My mission was completely at odds with their needs, so I was unwilling to build a product to help them fire their people.
This might have been stupid. There are good reasons why a CEO might naturally build what these people want. The hardest thing in the world to find for a new product is a motivated prospective customer who has spending authority, and here they are, asking for help. The signal is really clear:
You do a bunch of user interviews, they all tell the same story of needing to reduce cost, and in every case, budgets are shrinking and the major cost is labor. Great, I’ll build some automation, and it will increase productivity by X%, thus enabling a downsizing. The customer is happy, I get rich, and, ah, well, if you get fired you probably deserved it for not investing enough in your career. (I heard this last bit from a founder recently. Yay.)
This reasoning is common, but that does not make it right. (Or ethical.) And you’ll probably fail because of your bad decisions.
Let’s start with the fact that you have not done any user interviews. None.
The only users in this story are the ones you’re trying to fire. Executives aren’t users. Managers aren’t users. It seems like you should listen to them, because they have a lot of opinions, and they’re the ones writing checks, but nope.
This has a couple of consequences. First, you don’t understand the problem if you only talk to buyers, because they only see it at a distance. You have to talk to people on the ground who are doing the work. Be careful when talking to them, though, because you might start to empathize with them, which makes it harder to help fire them.
Even if you do manage to understand the problem, your product will still likely fail. As much as buyers center themselves in the story of adopting new technology, they’re largely irrelevant. Only the people at the front line really matter. I mean, it’s in the word: Users use the software. Someone, somewhere, has to say: Yes, I will use this thing you’ve built, every day, to do my job.
If you’ve only talked to buyers, you have built a buyer-centric product, rather than a user-centric one. Sure, maybe you got lucky and were able to build something pretty good while only talking to managers and disrespecting the workers so much that you think they’re worthless. But I doubt it. You’ll experience the classic enterprise problem of closing a deal but getting no adoption, and thus not getting that crucial renewal. Given that you usually don’t actually make money from a customer until the second or third year of the relationship… not so great.
Users aren’t stupid. Yes, I know we like to act like they are. But they aren’t. If your value promise is, “Adopt my software and 10% of your team is going to get fired,” people know. And they won’t use it, unless they really don’t have a choice. Some of that is selfish - no one wants to help team members get fired, and even if they’re safe today, they know they’re on the block for the next round of cuts. But it’s just as likely to be pragmatic. You’re so focused on downsizing the team that you never stopped to ask what they need. Why would someone adopt something that didn’t solve their problems?
What’s that you say? You ignored their problems because you were focused on the boss’s needs? This is why no one uses your software. Your disrespect resulted in a crappy product.
Call me a communist, but I think most people are skilled at their jobs. I am confident I can find learned skill in even so-called “low skill” labor, and I absolutely know I can in most of the areas people are building software for.
I was talking to a friend in a data science group in a software company recently, and he was noting how hard it was to sell their software. He said every prospective buyer had two experts in the basement who they could never seem to get past. So I asked him, are you trying to help those experts, or replace them?
He said, well, our software is so great, they aren’t really necessary any more.
There’s your problem. You’re promising to fire the only two people in the whole company who understand what you do. So I challenged him: What would your product, your company look like if you saw your job as making them do better work faster, rather than eliminating the need for them?
It’s a big shift. But it’s an important one. In his case, I think it’s necessary to reduce the friction in his sales process, and even more importantly, to keep those experts in house and making their employers smarter, rather than moving them on and losing years of experience and knowledge.
The stakes can get much bigger than downsizing. In his new book, Ruined by Design, Mike Monteiro has made it clear that designers and developers make ethical choices every day. Just because Uber’s and Instacart’s business models require mistreating and underpaying workers doesn’t mean you need to help them. While I don’t think technology is at fault for most job losses, there absolutely are people out there who see an opportunity to make money by destroying industries.
This is not fundamentally different from the strip mining that happened to corporations in the 1980s, except back then they were making money by removing profit margin from companies, and now they’re making money by removing “profit” margin from people’s lives. Jeff Bezos of Amazon has famously said that your margin is his opportunity, and his warehouse workers’ experiences make clear that he thinks that’s as true of his employees as it is of his suppliers and competitors.
Just because they’re going to get rich ruining people’s lives doesn’t mean you have to help.
I think your job matters. I think software can and should have a hugely positive impact on the world; not that one project can by itself make the world better, but that every person could have their life improved by the right product or service.
But that will only happen if we truthfully, honestly try to help our users.
When, instead, we focus too much on margin, on disruption, on buyers, on business problems… we become the problem.
You can either be a good example or a horrible warning. When it comes to enterprise sales, I had two horrible warnings before I started Puppet.
In 2000, I worked at Bluestar, a business DSL startup in Nashville. Pretty much everything that can go wrong with a startup did with this one: the founder was pushed out the week I started (I swear it wasn’t my fault); they raised too much money ($450m) and then spent it badly (e.g., on hardware that didn’t work and salespeople that didn’t sell); they brought in a big-business CEO who had no idea how to run a growth company; and then the regulatory framework shifted to heavily advantage monopolies again, so all the DSL startups went broke. But in the meantime, I got to learn a lot, both about the problems that eventually resulted in my starting Puppet, and about what does and doesn’t work in business.
At one point, the company decided to buy a new product. I honestly can’t remember what it was for. Something related to asset tracking? Or maybe some kind of operational monitoring software?
I don’t know. I just know I shifted from being a sysadmin to being responsible for making it work. I wasn’t part of the team that decided whether to buy something, and if so, which one to buy; I was just designated to put their decisions into action. In the months I worked on it, I don’t think we ever even got it installed anywhere except on a test server, and at some point we just, ah, decided we didn’t need it any more. The project went away, so I returned to my old job. The executive who had made this horrible decision had the gall to say my moving back to my old role was a strike against me, and it would reflect on my tenure at the company. No worries, he was gone the next month.
This wasn’t just a software problem. While the company was slowly dying, they had an argument with EMC over a storage array they never should have purchased. A million dollars of hardware sat in a receiving warehouse for almost a year, because we would not accept it, and EMC would not take it back.
The second warning was during my brief stint at Bladelogic. I worked there for less than six months, but I learned a lot. Again, mostly what not to do. I was ostensibly a product manager, but in practice they just wanted me to maintain their lab and maybe write some justifications for how their product worked. Certainly they did not want to listen to me. My most memorable experience is being in an all-dev-team meeting when the most senior engineer said something like, “What does it matter what the customer thinks? They already bought the product.” Astoundingly, the CTO did not fire him on the spot, and instead just moved on, ignoring the comment entirely.
It was clear Bladelogic’s business model enabled them to just not care what their customers thought. Only prospects mattered. Once the deal was closed, meh, they got paid, no biggie. You literally could not upgrade their software without losing all of your data - you know, the stuff you’re using to build and deploy your whole infrastructure - and doing any real work with the system required that you do everything twice, once to deploy and a second time to update. But you’d never discover that unless you actually used the software, which would be long after their salespeople left, so who cares? Not them.
You can maybe see why I lasted less than six months. It didn’t help that I was commuting between Boston and Nashville, and I’d managed to rent an apartment at the center of a cold vortex in Boston where my roommate collected Grateful Dead grape juice.
So when I started Puppet, I didn’t know much, but I at least had some anti-patterns. I knew we had to care more about our customers successfully using the product than we did about closing the initial deal, and that selling to people who would not use the software was a bad idea.
It turns out, that’s not quite sufficient to develop an effective sales strategy. Who knew?
I was lucky enough to hire the best sales leader in Oregon, who was not only incredibly skilled and experienced, he was also used to entrepreneurs and found me relatively sane compared to bosses he’d had in the past. Where a bunch of our engineers complained every time I opened my mouth, this guy quietly soldiered on. That made our years-long argument much easier to manage.
Early on, I didn’t know enough to break down what I wanted and what I didn’t, or how to talk about the individual behaviors, so I just wrapped up everything I hated and called it “enterprise sales”. We weren’t doing that. Ironically, our sales leader agreed with most of my concerns, so it wasn’t a real fight in the normal sense, but there were multiple areas he was convinced we needed to change, and it’s hard to do that when your ignorant CEO just puts up a ward against the evil eye and changes the subject.
Within a couple of years, he wouldn’t even say the word “enterprise”, because I would jump down his throat, proverbially speaking.
In the first few years of building Puppet, I tended to focus on preventing sales from skewing our product plans. I wanted to be sure we built products to be used, not sold, and I didn’t trust myself or the team to be able to tell the difference. I think this was basically right, but today, I would know that you should treat ideas from sales like you treat those from customers:
Always listen to what customers tell you, but never do what they say.
The sales team has a limited lens into the product world. They are smart and highly educated about your customer, but that doesn’t automatically translate into good solutions.
This is a general risk at any company with sales teams, but enterprise sales teams bring an even more pernicious variant: being confused about who your customer is.
Are you building the product for the person who buys it, or the one who uses it?
Remember back to that product I tried to set up at Bluestar. It was purchased to solve a business problem, and the person who decided to buy it did so based on discussions with sales and, probably, looking very closely at a grid of check marks comparing it to its competitors. Actually using it was someone else’s problem.
In fact, I was not going to be the user either - I was supposed to be its administrator. Some other team (support or installation, probably) was going to actually use it. So they were even further from the buying decision.
If you’re selling to the enterprise, getting a deal done requires that you convince the buyer that your product is a winner. That makes them the most important person at the customer. Now, a quality company would also involve users, administrators, and many others in a buying decision, but in the end, the buyer decides. Two or three decades ago, these decisions were mostly made on the golf course, so schmoozing was the most important feature. Today, it’s a lot less corrupt, but not a whole lot more functional.
This brings us to the other problem in this separation between user and buyer: Enterprise sales is a team sale, not a sale to one user. Suddenly you succeed based on your ability to manage the interpersonal relationships of warring sub-teams at your customer, instead of the strengths of your product. I distinctly remember a dinner with dozens of customer employees where there was almost a flashing DMZ between two teams who had differing opinions on whether our solution was the right one. Salesperson quality and experience begin to matter more than anything else, because you’re basically managing internal politics to get a deal done.
Where did the focus on our product go? How do we stay focused on building something our users love?
We don’t, really. It’s hard to sustain an effective feedback loop that includes sales if they’re focused more on people and politics than products. Not impossible. But hard.
At a big company, you can begin to navigate this kind of cognitive dissonance - listen to your sales team, but don’t build the products they demand. But in the early days of Puppet, I knew I couldn’t handle it. I am not good at dissonance in general - I’m a bit too fond of the idea that there’s just one truth - but I especially knew my organization could not handle it. We needed to be 100% aligned, and that meant sales needed to be working on the same problems as our product teams. Thus, no enterprise sales.
As we got bigger, the other big problem with enterprise sales started to show up: Wow, is it expensive. Lew Cirne of New Relic told me the primary reason he sold Wily when he did was that he needed $150m just to build out the sales team, and it wasn’t worth it.
If you’re doing inside sales, you’ve probably got someone who can talk through most of the product, they can talk to ten or more customers a day, and only once in a while will they pull someone in to help get a deal done. Once you go enterprise, you have field reps who might be covering thousands of square miles of territory, so if you’re lucky they’ll do three meetings a day on average, and they need a sales engineer on almost every visit. They pull in an expensive executive for meetings as often as an inside rep would pull in a cheap sales engineer.
Yes, you can get much bigger deals done this way, but think about the disruption to your organization: Essentially everyone on your leadership team is taking time away from running the business, not to learn from customers but just to make them feel loved enough to write a big check. Your deals start taking nine months to close instead of six weeks, and getting a check signed begins to look more like a challenge level in a video game than a partnership to solve customer problems. And the boss fight of that game is the worst part of enterprise sales: Procurement.
I’m not in the habit of disrespecting roles or teams, and I think procurement is often staffed with experts who play a vital role in their company. But they are generally paid based on how much money they “save” the company. All that discounting that you have to do for enterprise clients? It’s because procurement’s bonus is based on how much of a discount they force you to give. Absolutely everyone knows this is how it works, and that everyone knows this, so it’s just a game. I offer my product for a huge price, you try to force a discount, and then at the end we all compare notes to see how we did relative to market. Neither of us really wants to be too far out of spec; I want to keep my average prices the same, and you just want to be sure you aren’t paying too much.
But because companies compensate procurement based on saving money rather than making good decisions about what to buy, we can sell crappy products at a steep discount but not good products at list price.
It’s a helluva boss fight.
There’s often a miniboss, too: Legal. They just want their pound of flesh, and often this seems more like a puzzle level than a direct fight. I recently saw a deal that had been in legal for a year. That’s too much puzzle for me. (Incidentally, I worked on that same customer more than 4 years ago. Talk about long sales cycles.)
So now you begin to see why I fought against enterprise sales: It encourages you to build the wrong product for the wrong person and then sell it the wrong way at the wrong price.
Why, then, is it so popular? Or rather, why is it so hard to avoid that despite my best efforts we ended up in an enterprise sales motion, which I then ran away from?
Well, first and foremost, if it works it’s incredibly lucrative. For all that Lew Cirne built New Relic in response to his experience at Wily, and pointedly avoided enterprise sales for years, once they went public they went through a dramatic transformation and added it in, because the money was just too appealing. The biggest companies buy the most software, and, well, the biggest companies want to be sold a specific way.
In many cases, you just can’t avoid it. That’s a lot of what happened at Puppet: Our products were built to solve problems that big companies have. Heterogeneous environments, every operating system and application known to man, complex networks, and heavy compliance needs. Turns out it’s rare that a company has all these problems but buys large software products like you buy toilet paper.
Our first deals at companies did tend to look very consumer-like. But once they wanted to expand to other teams, and especially if they wanted to cover the whole company, the relationship naturally switched to a team sale, where we’re having to work with legal, procurement, executives, and then reps from three or four other teams. Ideally someone inside the org is an advocate for our product, so it’s more facilitation than direct selling, but the problem still stands: This is a clear enterprise sale.
But when it works… wow. You start closing $100k deals, then $300k, then $1m, then $10m. This starts to add up.
And for all that I’ve said this is hard… it’s actually the easiest way to sell.
What’s actually hard is having the best product, and only ever winning based on merit. Enterprise sales is the default motion, and in many cases it’s chosen to paper over weaknesses in the product. After all, only the user would actually notice those; in a meeting with the CIO, procurement, legal, and project management, no one’s going to install the product and give it a runout.
We’re still super early as an industry in our understanding of how to build a product that doesn’t rely on enterprise sales. For all that Atlassian relies more on sales than it has said, there’s no question that they managed to avoid an enterprise selling motion. I’m hoping the next generations of software companies will learn from them instead of Workday.
In the meantime, hopefully this story of how I fought enterprise sales, and why, will help you make better decisions about how to build your own teams. At the least, maybe I can just be a horrible warning.
These feature checklists are bad ideas. Don’t trust them as a user, don’t make them as a product marketer. ↩
Look, I have to say it: You’re weird. Even if I don’t know you, I’m confident: Somewhere, maybe lurking deep inside, something about you is just not right. I don’t know what, specifically. For all I know, you might be one of those weirdos whose particular strangeness is just how authentically normal you are. shudder.
This might be insulting to you, calling you weird. It happens a lot: I think I’m complimenting someone and they get all huffy. Conversely, people are often afraid I’ll be hurt when they shyly let me know that I, ah, don’t really fit. Don’t worry; you’d need to know me a lot better to successfully offend me.
Society is not a huge fan of weirdness - I mean, the definition is pretty much, “does not fit into society” - and it trains you away from it. We’re social animals, so you probably do what you can to conceal, or at least downplay, anything different. It makes sense. It’s a basic survival mechanism.
I know I do it. I can’t hide everything - some stuff just can’t be covered up - but I can usually skate through a conversation or two before people back up a step and give me that funny, sometimes frightened, look. Being on the west coast helps; I’m a little less weird here than I was in the south. It probably also helps that I cut my mohawk, and the spiked leather jacket and knee-high boots stay in the closet now.
I’ve written a bit about my struggles to balance authenticity and fitting in. I think it’s important to call it out, because those who experience this struggle rarely have the luxury of admitting it. I’m lucky enough in multiple ways that I can be up front about it now. But resolving this conflict matters for more than psychological reasons. Our own goals usually require that we learn to embrace our weird. Not just grab on to it, actually, but really live in it. Inhabit it.
That weirdness is how we win.
This is easiest to show in investing. We have a natural tendency to do what is proven to work, but that is only assured of getting “market” - in other words, mediocre - returns. If you study the best investors, they’re all doing something that seems weird. Or at least, it did when they started. The first people who paid to string fiber from NYC to Chicago to make trades a couple milliseconds faster were considered pretty weird, but they knew the truth: Normal behavior gets normal returns, anything more requires true weirdness. (Well, or fraud. There’s always that if you’re afraid to stand out.)
It’s the same way in life. You can’t say you want something different, you want to be special, but then follow the same path as everyone else. “I’ll embrace what makes me special just as soon as I get financial security via a well-trodden path to success.” Oh yeah. We definitely believe that.
There’s a nice sleight of hand you can do, where you can say you’re doing something different, but really you’re a rare form of normal. The first few doctors and nurses were really weird. Those who recommended you wash hands before surgery were literally laughed at, considered dangerous crackpots1. But now? Most people become a doctor in pretty much the same way. Being a doctor is normal now, even if it’s not common. That’s probably good.
But what if your job is innovation? What if your whole story revolves around being different? Can you still follow a common path?
Because that’s what too many entrepreneurs today are doing: Trying to succeed at something different, by doing what everyone else is doing.
I mean. Not literally everyone else. But close enough.
It starts out innocently enough. There aren’t many people starting tech companies at first, and boy howdy are they weird. Someone makes a ton of money, all their weirdness gets written up - “hah hah, see how he has no sense of humanity but is somehow still a billionaire?” - and now we’ve got something to compare to. Hmm. Well. We can’t consistently duplicate Jobs, Gates, Packard. But if we tell enough stories enough times, we find some kind of average path through them. Ah! Enlightenment!
Now that we know what “most” people do, we can try it too. I mean, we have no idea if the stories about those people have anything to do with why they succeeded, but why let that get in our way? Conveniently, every time it works we’ll loudly claim success, but silently skip publishing any failures. Just ask Jim Collins: He got rich by cherry-picking data in Good to Great to “prove” there was a common path to business success. It turned out to have as much predictive value as an astrological reading, and is just business garbage dressed up in intellectual rigor, but that doesn’t seem to have hurt him.
The business world keeps buying his books. They need to believe there’s a common path that anyone can travel to victory. Otherwise, what would they sell? What would they buy?
Obviously this doesn’t work. There is no standard playbook to winning an arms race. Once there’s even a sniff of one, people copy it until it doesn’t work any more. This is pretty much the definition of the efficient market hypothesis: There’s no standard way to get above-average results. Once Warren Buffett got sufficiently rich as a value investor, so many people adopted the strategy that, well, it’s hard to make money that way. Not impossible, but nowhere near as easy as it was fifty years ago.
Of course, you can go too far in being weird. There has to be something in your business, in your strategy, that makes you different enough that you just might win. But adding a lot of other strangeness for no good reason worsens already long odds. The fact that Steve Jobs did so well even though he was a raging asshole, even to his best friends, made his success just that much less likely. Most people are a bit more like Gates and Bezos: Utterly ruthless in business, and caring not a whit for the downsides of their success, but perfectly capable of coming off as a decent person whenever required.
I’m rarely accused of being a world-class jerk, but I don’t pass the smell test as normal for very long. Jim Collins might say maybe if I were more pathological I would have succeeded more. With Jobs and Musk as examples, it seems reasonable, right? In truth, it’s just as reasonable that I would have done better by dropping out of Reed College, like Jobs did, rather than foolishly graduating from it. Think it’s too late to retroactively quit early?
Yes, you have to learn to love your weird, but it shouldn’t be arbitrary. You can’t realistically say that you’re going to rock it in business because you’re addicted to collecting gum wrappers from the 50s. I agree that that’s weird, but is it usefully so? Being a jerk is weird, and bad, but it’s not helpfully so. And really, dropping out of college isn’t that weird for someone in Jobs’s financial position at the time. It’s only if you have a bunch of money that it seems so.
I recommend you take the time to think deeply about what opinions you hold that no one else seems to, what beliefs you have that constantly surprise you by their absence in others. What do you find easy that others find impossible? What’s natural to you, but somewhere between confounding and an abomination to those who notice you doing it?
Those things aren’t all good. And in many cases, you’ll need to spend your entire professional life managing their downsides, like I have. But somewhere in that list is what sets you apart, what gives you the opportunity to truly stand out. They’re the ground you need to build your future on.
Unless you just want to be normal. In that case, I don’t think I can help you.
This is an amazing example of sexism. The doctors’ wards had three times the fatality rates of the midwife wards, but of course, they were doing nothing wrong at all. ↩
Being an advisor to other founders is a contradictory affair: Be helpful, but do not give advice. That is, I want to help you do your best work, but I don’t think I can or should do it by telling you what to do or think.
I obviously think I have value to add or I would not sign up to help. Well, maybe it’s not obvious; our industry is rife with advisors who attach their names and little else to projects. It’s true I’m motivated to join partly by the possible long-term reward, but mostly I’m helping because I enjoy it and am learning a lot.
While running Puppet, I was constantly confronted with a classic leadership struggle: How do I simultaneously help people improve their own answers, yet get them to do what I want? There are many who will say this is a false struggle, that I could have avoided it by focusing on empowering people instead of trying to get them to do what I wanted. Pfft. The literal definition of leadership is providing direction and getting people there, and that’s doubly so for a fast-growing startup where alignment is critical to execution. I spent a decade slowly, incrementally, getting better at this, but felt my incompetence as keenly at the end as I did at the beginning.1
Advising companies allows me to practice the empowerment-half of this skill without the other complications. Unlike when I was a CEO, I know I should not be setting direction or making decisions. My job is not to provide answers, but to help people do their own best work.
My only explicit training for this was when I was an organic chemistry lab tech in college. My primary task was repeating questions back to the students: “I don’t know, which layer do you keep?”2 When I started dating my now-wife in college, she told me her friends were bitter that I would not give them answers. I knew my job. I was there to help them get an education, which required they did the work on their own. This has also been helpful experience for being a parent: “I don’t know, what is 12 times 9?”
Advising CEOs has similar constraints, but it’s a lot more open-ended, and has no answer sheet. In the lab, there was one right answer, it was always the same, and you could reason it out with the information at hand. Labs were also usually a day of work, maybe three days, and mistakes were pretty cheap, in the grand scheme of things. I don’t expect one of those students to track me down later in life and lay at my feet all of their struggles or successes. Most importantly, we were studying an objective space that I did actually know more about. When push came to shove, I knew the answers, and I could reason out anything that wasn’t obvious.
Helping CEOs is considerably harder. I’m rarely asked about questions that have a single right answer. No competent CEO would bother getting advice on an easy question, or one whose answer wasn’t important. Wrap into this the fact that I can’t possibly know the company as well as the person asking me the question.3 It’s inconceivable that I would often have answers available that the expert in the seat doesn’t.
That simplifies the challenge: Prod the questioner into getting to their own answer, no matter how much they complain. And they do sometimes get upset: I had a CEO exasperatedly demand what I would do, after a long session of forcing him to work through what he cared about, what he saw as the right answer. When I relented - only after he had already done all the hard work - he could see how thin and useless my answer was. By the time he’d decided what to do, he saw that what he learned from the process was at least as important as the answer, and my just providing a solution could never give that.
There is still some risk. I’m by no means a master of this technique. I know I have at times presented people’s options in stark ways, which sometimes felt like no choice at all. My own predilections, such as toward a consumer-style sales model, are hard to separate from any guidance I might provide. It’s honestly just hard to know sometimes whether you’re successfully getting someone to express their own implicit belief or leading them to agree with one of yours.
It’s a skill I expect to spend the rest of my life trying to master. But it’s worth doing, and I’m enjoying the learning process.
Helping CEOs instead of running my own company provides a kind of repeatable laboratory environment. I get to learn at the same time, though, because it’s much harder than being a lab tech.
It’s not enough to just parrot questions back. I spend my time listening closely and drawing out more information, then replaying back what I heard. Listening is a woefully underrated skill. I’ve been loving the opportunity to practice really hearing what people are saying, and trying to differentiate between the words they use, the meaning behind them, and their intent in saying it at all.
As you look for advisors, be sure you demand the same discipline from them. Don’t accept answers. They should hear you, understand your dilemma, and be able to point out where you haven’t thought completely, or clearly.
A great advisor should provide light, not direction.
If this whole definition of leadership annoys or offends you, I’d ask how you differentiate between leadership and management, and also how you expect a company to align around a direction without someone picking the direction. ↩
Nearly every experiment in organic chemistry involves using liquids to separate chemicals, where part of the solution ends up in an aqueous (watery) layer, and the other ends up in another layer, like separated oil and vinegar in salad dressing. One of those layers is now waste, and the other one has the chemical you’re working on. Don’t throw away the wrong one! ↩
This is another big difference from when I was the leader; I knew Puppet itself better than anyone, even if I could not know your specific area as well. ↩