[EQIX – Equinix; INXN – Interxion] Network Effects in a Box
The “internet”, as the name implies, is a network of networks. Scuttleblurb.com sits on a server somewhere, connected to an IP network different from the one your device is connected to, and the fact that you are reading this means those two networks are communicating. Likewise, if your internet service provider is Charter and you’d like to send an email to a friend whose ISP is Comcast, the networks of Charter and Comcast need a way to trade data traffic (“peer” with one another). In the nascent days of the internet, different networks did this at Network Access Points established and operated by non-profits and the government. Over time, large telecom carriers, who owned the core networks, took control of coordinating peering activity, and small carriers that wished to exchange traffic with them were forced to house switching equipment on their premises.
Eventually, most peering agreements moved to “carrier-neutral” Internet Exchange Points (“IXs” or “IXPs”, data centers like those owned and/or operated by Equinix and Interxion) that were independent of any single carrier. Today, global carrier neutral colocation/interconnection revenue of $15bn exceeds that of bandwidth provider colo by a factor of two as major telcos have seen their exchange businesses wither. At first, service providers landed hooks at these neutral exchange points…and then came the content providers, financial institutions, and enterprises, in that order. Customers at these neutral exchange points can connect to a single port and access hundreds of carriers and ISPs within a single data center or cluster of data centers. Alternatively, a B2B enterprise that wants to sync to its partners and customers without enduring the congestion of public peering [on a “shared fabric” or “peering fabric”, where multiple parties interconnect their networks at a single port] can establish private “cross-connects”, or cables that directly tether its equipment to that of its customers within the same DC [in an intracampus cross connect, the DC operator connects multiple datacenters with fiber optic cables, giving customers access to customers located in other DC buildings].
[To get a sense of how consequential network peering is to experiencing the web as we know it today, here’s an account of a de-peering incident, as told by Andrew Blum in his book Tubes: A Journey to the Center of the Internet:
“In one famous de-peering episode in 2008, Sprint stopped peering with Cogent for three days. As a result, 3.3% of global Internet addresses ‘partitioned’, meaning they were cut off from the rest of the Internet…Any network that was ‘single-homed’ behind Sprint or Cogent – meaning they relied on the network exclusively to get to the rest of the Internet – was unable to reach any network that was ‘single-homed’ behind the other. Among the better-known ‘captives’ behind Sprint were the US Department of Justice, the Commonwealth of Massachusetts, and Northrop Grumman; behind Cogent were NASA, ING Canada, and the New York court system. Emails between the two camps couldn’t be delivered. Their websites appeared to be unavailable, the connection unable to be established.”]
The benefits of colocation seem obvious enough. Even the $2mn+ of capital you’re laying out upfront for a small 10k sf private build is a pittance compared to the recurring expenses – taxes, staff, maintenance, and especially power – of operating it. You won’t get the latency benefits of cross-connecting with customers, you’ll pay costly networking tolls to local transit providers, and you’re probably not even going to be using all that built capacity most of the time anyhow. Why be a loner? As my wife tells me at parties, go where the people are.
There’s a theory in urban economics called “economies of agglomeration”, which posits that firms in related industries achieve scale economies by clustering together in a confined region: the concentration of related companies attracts deep, specialized pools of labor and suppliers that can be accessed more cost-effectively in one place, and generates technology and knowledge spillovers. For instance, the dense concentration of asset managers in Manhattan attracts newly minted MBAs looking for jobs, service providers scouring for clients, and management teams pitching debt and equity offerings. Analysts at these shops can easily and informally get together and share ideas. This set of knowledge and resources, in turn, compels asset managers to set up shop in Manhattan, reinforcing the feedback loop.
I think you see where I’m going with this. The day-to-day interactions that a business used to have with its suppliers, partners, and customers in physical space – trading securities, coordinating product development, placing an order, paying the bills – have been increasingly mapped onto a virtual landscape over the last several decades. Datacenters are the new cities. Equinix’s critical competitive advantage, what separates it from being a commodity lessor of power and space, resides in the network effects spawned by connectivity among a dense and diverse tenant base within its 180+ data centers. You might also cite the time and cost of permitting and constructing a datacenter as an entry barrier, and this might be a more valid one in Europe than in the US, but I think it’s largely beside the point. The real moat comes from convincing carriers to plug into your datacenter and spinning up an ecosystem of connecting networks on top.
The roots of this moat extend all the way back to the late ’90s, when major telecom carriers embedded their network backbones into datacenters owned by Interxion, Telx [acquired by Digital Realty in October 2015], and Equinix, creating the conditions for network effects to blossom over the ensuing decade+: a customer will choose the interconnection exchange on which it can peer with many other relevant customers, partners, and service providers; carriers and service providers, in virtuous fashion, will connect to the exchange that supports a critical mass of content providers and enterprises. Furthermore, each incremental datacenter that Equinix or Interxion builds is both strengthened by and reinforces the existing web of connected participants in current datacenters on campus, creating what are known as “communities of interest” among related companies, like this (from Interxion):
[The bottom layer, the connectivity providers, used to comprise ~80% of INXN’s revenue in the early 2000s]
So, for instance, and I’m just making this up, inside an Interxion datacenter, Netflix can manage part of its content library and track user engagement by cross-connecting with AWS, and distribute that content with a high degree of reliability across Europe by syncing with any number of connectivity providers in the bottom layer. In major European financial centers, where Interxion’s datacenter campuses host financial services constituents, a broker who requires no/low latency trade execution and data feeds can, at little or no cost, cross-connect with trading venues and providers of market data who are located on the same Interxion campuses. Or consider all the parties involved in an electronic payments transaction, from processors to banks to application providers to wireless carriers, who must all trade traffic in real time. These ecosystems of mutually reinforcing entities have been fostered over nearly 20 years and are difficult to replicate. Customers rely on these datacenters for mission critical network access, making them very sticky, as is evidenced by Equinix’s MRR churn rate of ~2%-2.5%.
[Here’s a cool visual from Equinix’s Analyst Day that shows how dramatically its Chicago metro’s cross-connects have proliferated over the last 6 years, testifying to the network effects at play. Note how thick the fibers on the “Network” cell wall are. Connectivity providers are the key.]
The carrier-rich retail colocation datacenters that I refer to in this post differ from their more commodified “wholesale” cousins in that the latter cater to large enterprises that lease entire facilities, design and construct their architectures, and employ their own technical support staff. Retail datacenters with internet exchanges, meanwhile, are occupied by smaller customers who lease by the cabinet, share pre-configured space with other customers, and rely on the DC’s staff for tech support. But most critically, because wholesale customers primarily use DCs for space and power rather than connectivity, they do not benefit from the same network effects that underpin the connectivity-rich colo moat. It is the aggregation function that gives rise to a fragmented customer base of enterprises, cloud and internet service providers, and system integrators (Equinix’s largest customer accounts for less than 3% of monthly recurring revenue) and allows the IX colo to persistently implement price hikes that at least keep up with inflation.
This is not the case for a wholesale DC provider, who relies on a few large enterprises that wield negotiating leverage over them. DuPont Fabros’ largest customer is over 25% of revenue; its second largest accounts for another 20%. A simple way to see the value differentiation between commodity wholesale and carrier-rich retail data center operators is to simply compare their returns on gross PP&E over time.
EBITDA / BoP Gross PPE
[There are also retail colos without internet exchange points that deliver more value for their customers than wholesale DCs but less compared to their IX retail brethren, and some DCs operate a hybrid wholesale/retail model as well. It’s a spectrum]
So you can see why wholesale DCs have been trying to break into the IX game organically, with little success, for years. Digital Realty, which today gets ~15% of its revenue from interconnection, bought its way into the space through its acquisition of Telx in October 2015, followed by its acquisition of a portfolio of DCs from Equinix in July 2016.
The secular demand drivers are many…I’m talking about all the trends that have been tirelessly discussed for the last several years: enterprise cloud computing, edge computing, mobile data, internet of things, e-commerce, streaming video content, big data. These phenomena are only moving in one direction. We all know this and agree.
But it’s not just about the amount of data that is generated and consumed, but also about how. The ever hastening pace and competitiveness of business demand that companies have access to applications on whatever device wherever they happen to be; the data generated from their consumption patterns and from the burgeoning thicket of IoT devices need to, in turn, be shot back to data center nodes for analysis and insight. And the transfer of data to and from end users and devices to the datacenters needs to happen quickly and cheaply. Today, the typical network topology looks something like this…
…a hub-and-spoke model where an application traverses great lengths from a core datacenter located somewhere in the sticks to reach end users, and data is then backhauled from the end user back to the core. This is expensive, bandwidth-taxing, slow, and, because it is pushed over the public internet, sometimes in technical violation of strict security and privacy protocols. You can imagine how much data a sensor-laden self-driving car generates every minute and how unacceptably long it would take, and how expensive it would be, to continuously transfer it all back to the core over a 4G network. The IT network should instead be reconfigured to look like this…
…a widely-distributed footprint of nodes close to end user/device, each node hosting a rich ecosystem of networks and cloud partners that other networks and cloud partners care about, pushing and pulling bits to and from users more securely and with far less latency vs. the hub/spoke configuration. Microsoft clearly shares the same vision. Satya Nadella on MSFT’s fiscal 4q conference call:
“So to me that’s what we are building to. It’s actually a big architectural shift from thinking purely of this as a migration to some public cloud to really thinking of this as a real future distributed computing infrastructure and applications…from a forward looking perspective I want us to be very, very clear that we anticipate the edge to be actually one of the more exciting parts of what’s happening with our infrastructure.”
Something to consider is that while distributed computing appears to offer tailwind for the IX colos, it can have existential whiffs when pushed to the extreme. Is it really the long-term secular trend that EQIX management unequivocally proclaims it to be? Or is it just a processing pit stop for workloads that are inexorably inching their way further to the edge, to be directly manipulated by increasingly intelligent devices?
Consider that this past summer, Microsoft released Azure IoT Edge, a Windows/Linux solution that enables in-device AI and analytics using the same code running in the cloud. To draw an example from Microsoft’s Build Developer conference, Sandvik Coromant, a Swedish manufacturer of cutting tools, already has machines on its factory floors that send telemetry to the Azure cloud, where machine learning is applied to the data to predict maintenance needs and trigger preemptive machine shutdowns when certain parameters are tripped. But with Azure IoT Edge, that logic, and a whole menu of others that used to reside solely in the cloud, can now be ported directly to the machines themselves. The process loop – sending telemetry from the device to the cloud, analyzing it, and shooting it back down to the device – is obviated, cutting the time to decommission a faulty machine from 2 seconds down to ~100 milliseconds. While this might seem to render the cloud node inert, note that the algorithms are still developed and tested in the data center before being exported to and executed on the device…and even as in-device local AI becomes more sophisticated, the data deluge from burgeoning end nodes will still need to be synced to a centralized processing repository to more intensively train machine learning algorithms and generate predictive insights that are more expansive than can be derived locally.
But there is also the fear that as enterprises consider moving workloads off-premise, they bypass hybrid [public + private colocated or on-premise cloud services] and host mostly or entirely with a public hyperscale vendor (AWS, Azure, Google Cloud) [a colocated enterprise brings and maintains its own equipment to the datacenter, whereas a public cloud customer uses the equipment of the cloud provider] or that current hybrid enterprises migrate more and more workloads to the public cloud…or that public cloud vendors build out their own network nodes to host hybrid enterprises. But by all accounts, Equinix is in deep, mutually beneficial partnership with Cloud & IT services customers (AWS, Google, Azure, Box, SaaS companies), who have been the most significant contributors to Equinix’s monthly recurring revenue (MRR) growth over the last several years. The hyperscalers are relying on connectivity-rich colos like Equinix and Interxion to serve as their network nodes to meet latency needs on the edge.
There are 50 or so undersea cable initiatives in the world today that are being constructed to meet the proliferating amount of cross-border internet traffic, which has grown by 45x over the last decade. These subsea projects are being funded not by telecom networks as in days past, but by the major public cloud vendors and Facebook, who are landing many of those cables directly on third party interconnection-rich colos that host their web services.
Cloud & IT customers comprise half of Equinix’s top 10 customers by monthly recurring revenue (MRR) and operate across all three of the company’s regions (Americas, EMEA, and APAC) in, on average, 40 of its datacenters [compared to 4 of the top 10 operating in fewer than 30 datacenters, on average, just a year ago]. The number of customers and deployments on Equinix’s Performance Hub, where enterprises can cross-connect to the public clouds and operate their private cloud in hybrid fashion, has grown by 2x-3x since 1q15, while 50%+ growth in cross-connects to cloud services has underpinned 20% and 14% recurring revenue CAGRs for Enterprise and Cloud customers, respectively, over the last 3 years.
Still another possible risk factor was trumpeted with great fanfare during CNBC’s Delivering Alpha conference last month by Social Capital’s Chamath Palihapitiya, who claimed that Google was developing a chip that could run half of its computing on 10% of the silicon, leading him to conclude that:
“We can literally take a rack of servers that can basically replace seven or eight data centers and park it, drive it in an RV and park it beside a data center. Plug it into some air conditioning and power and it will take those data centers out of business.”
While this sounds like your standard casually provocative and contrived sound-bite from yet another SV thought leader, it was taken seriously enough to spark a sell-off in data center stocks and put the management teams of those companies on defense, with Digital Realty’s head of IR remarking to Data Center Knowledge:
“Andy Power and I are in New York, meeting with our largest institutional investors, and this topic has come up as basically the first question every single meeting.”
To state the obvious, when evaluating an existential claim that is predicated upon extrapolating a current trend, it’s often worth asking whether there is evidence of said trend’s impact today. For instance, the assertion that “intensifying e-commerce adoption will drive huge swaths of malls into extinction”, while bold, is at least hinted at by moribund foot traffic at malls and negative comps at mall-based specialty retailers over the last several years. Similarly, if it is indeed true that greater chip processing efficiency will dramatically reduce data center tenancy, it seems we should already be seeing this in the data, as Moore’s law has reliably held since Gordon Moore first articulated it in 1965, and server chips are far denser and more powerful today than they were 5-10 years ago. And yet, we see just the opposite.
Facebook, Microsoft, Alphabet, and Amazon are all accelerating their investments in datacenters in the coming years – opening new ones, expanding existing ones – and entering into long-term lease agreements with both wholesale and connectivity colo datacenter operators. Even as colocation operators have poured substantial sums into growth capex, utilization rates have trekked higher. Unit sales of Intel’s datacenter chips have increased by high-single digits per year over the last several years, suggesting that the neural networking chips that CP referred to are working alongside CPU servers, not replacing them.
A core assumption of CP’s argument seems to be that the amount of data generated and consumed is invariant to efficiency gains in computing. But cases to the contrary – where efficiency gains, in reducing the cost of consumption, have actually spurred more consumption and nullified the energy savings – are prevalent enough in the history of technological progress that they go by a name, “Jevons paradox”, described in this New Yorker article from December 2010:
In a paper published in 1998, the Yale economist William D. Nordhaus estimated the cost of lighting throughout human history. An ancient Babylonian, he calculated, needed to work more than forty-one hours to acquire enough lamp oil to provide a thousand lumen-hours of light—the equivalent of a seventy-five-watt incandescent bulb burning for about an hour. Thirty-five hundred years later, a contemporary of Thomas Jefferson’s could buy the same amount of illumination, in the form of tallow candles, by working for about five hours and twenty minutes. By 1992, an average American, with access to compact fluorescents, could do the same in less than half a second. Increasing the energy efficiency of illumination is nothing new; improved lighting has been “a lunch you’re paid to eat” ever since humans upgraded from cave fires (fifty-eight hours of labor for our early Stone Age ancestors). Yet our efficiency gains haven’t reduced the energy we expend on illumination or shrunk our energy consumption over all. On the contrary, we now generate light so extravagantly that darkness itself is spoken of as an endangered natural resource.
Modern air-conditioners, like modern refrigerators, are vastly more energy efficient than their mid-twentieth-century predecessors—in both cases, partly because of tighter standards established by the Department of Energy. But that efficiency has driven down their cost of operation, and manufacturing efficiencies and market growth have driven down the cost of production, to such an extent that the ownership percentage of 1960 has now flipped: by 2005, according to the Energy Information Administration, eighty-four per cent of all U.S. homes had air-conditioning, and most of it was central. Stan Cox, who is the author of the recent book “Losing Our Cool,” told me that, between 1993 and 2005, “the energy efficiency of residential air-conditioning equipment improved twenty-eight per cent, but energy consumption for A.C. by the average air-conditioned household rose thirty-seven per cent.”
And the “paradox” certainly seems apparent in the case of server capacity and processing speed, where advances have continuously accommodated ever growing use cases that have sparked growth in overall power consumption. It’s true that GPUs are far more energy efficient to run than CPUs on a per instruction basis, but these chips are enabling far more incremental workloads than were possible before, not simply usurping a fixed quantum of work that was previously being handled by CPUs.
Where things start to become potentially ominous is when we move beyond incremental improvements in processing and consider a parallel shift in computing power. From the Wall Street Journal:
“…computer scientists say today’s most powerful laptops are closer to abacuses than quantum computers. The computing power of a data center stretching several city blocks could theoretically be achieved by a quantum chip the size of the period at the end of this sentence…
Comparing bits to qubits is facile because quantum and classical computers are fundamentally different machines. Unlike classical computers, quantum computers don’t test all possible solutions to a problem. Instead, they use algorithms to cancel out paths leading to wrong answers, leaving only paths to the right answer—and those algorithms work only for certain problems. This makes quantum computers unsuited for everyday tasks like surfing the web, so don’t expect a quantum iPhone. But what they can do is tackle specific, unthinkably complex problems like simulating new molecules to engineer lighter airplane parts, more effective drugs and better batteries.
Quantum computers are also subject to high error rates, which has led some scientists and mathematicians to question their viability…
The primary impetus in the race for quantum computers is the potential to upend industries. Experts believe their biggest near-term promise is to supercharge machine learning and AI, two rapidly growing fields—and businesses. Neven of Google says he expects all machine learning to be running on quantum computers within the decade.”
“I believe in the next five to 10 years we will have 100-plus-qubit machines that will be available to anyone, and this will be when useful applications will be found,” Monroe [a professor at the University of Maryland who studies quantum information theory] says. “My guess is that useful quantum applications will only be found once we build quantum machines that can be used by people who know about difficult problems in logistics, economic markets, pattern recognition, and modeling of materials.”
[By comparison, IBM has announced a 16-qubit machine and Google is rumored to have one with 22-qubits]
My vast ignorance on this topic is just about the only thing I’m sure qubits couldn’t process. I have no idea how to handicap the rate at which quantum computing will achieve stable commercial relevance or even understand anywhere near the full scope of commercial applications for that matter. However, it sounds exciting and frightening and relevant to this discussion, so I thought I’d flag it. I’m basically taking a head-in-the-sand approach to quantum computing until it congeals into something with more immediate real world import.
So, let me tuck in my tail and retreat to more familiar ground. With all this talk around chip speed, it’s easy to forget that the core value proposition offered by connectivity-rich colos like EQIX and INXN is not processing power but rather seamless connectivity to a variety of relevant networks, service providers, customers, and partners in a securely monitored facility with unimpeachable reliability. When you walk into an Equinix datacenter, you don’t see infinity rooms of servers training machine learning algorithms and hosting streaming sites, but rather cabinets housing huge pieces of switching equipment syncing different networks, and overhead cable trays secured to the ceiling, shielding thousands of different cross-connects.
The importance of connectivity means that the number of connectivity-rich datacenters will trend towards but never converge to a number that optimizes for scale economies alone. A distributed topology with multiple datacenters per region, as discussed in this post and outlined in this article, addresses several problems, including the huge left tail consequences of a single point of failure, the exorbitant cost of interconnect in regions with inefficient last-mile networks, latency, and jurisdictional mandates, especially in Europe, that require local data to remain within geographic borders. Faster chips do not solve any of these problems.
An IX data center operator leases property for 10+ years and signs 3-5 year customer contracts embedded with 2%-5% price escalators; monthly recurring fees for rent, power, and interconnection comprise ~95% of total revenue. A typical new build can reach ~80% utilization within 2-5 years and cash flow breakeven inside of 12 months. During the first two years or so after a datacenter opens, the vast majority of recurring revenue comes from rent. But as the datacenter fills up with customers, and those customers drag more and more of their workloads to the colo and connect with other customers within the same datacenter and across datacenters on the same campus, power and cross-connects represent an ever-growing mix of revenue, such that within 4-5 years they come to comprise the majority of revenue per colo and user.
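The revenue-mix shift described above can be sketched with a toy model. Every input below – cabinet count, fill rate, prices, escalators, cross-connect attach rates – is a made-up illustrative assumption, not a company figure; the point is only to show how rent dominates early and power + cross-connects overtake it as the facility matures:

```python
# Toy model (illustrative assumptions only): revenue mix of a filling IX datacenter.

def year_revenue(year):
    utilization = min(0.80, 0.25 + 0.15 * year)    # assumed ramp to ~80% by year 4
    cabinets = 1000 * utilization                  # assumed 1,000-cabinet facility
    rent = cabinets * 12.0 * (1.03 ** year)        # monthly rent with an assumed ~3% escalator
    power = cabinets * 8.0 * (0.5 + 0.15 * year)   # power billings grow as workloads migrate in
    xconns = cabinets * (0.5 + 0.6 * year) * 2.5   # cross-connects per cabinet rise with tenure
    return rent, power, xconns

for y in range(6):
    rent, power, x = year_revenue(y)
    total = rent + power + x
    print(f"year {y}: rent {rent/total:.0%}, power+xconn {(power + x)/total:.0%}")
```

Under these assumptions, rent is roughly 70% of revenue at opening and falls below half around year 4-5; the crossover point is quite sensitive to the assumed cross-connect attach rate.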
The cash costs at an Equinix datacenter break down like this:
% of cash operating costs at the datacenter:
So, roughly half of the costs – labor, rent, repairs, ~half of “other” – are fixed.
If you include the cash operating costs below the gross profit line [cost of revenue basically represents costs at the datacenter level: rental payments, electricity and bandwidth costs, IBX data center employee salaries (including stock comp), repairs, maintenance, security services.], the consolidated cost structure breaks down like this:
% of cash operating costs of EQIX / % of revenue / mostly fixed or variable in the short-term?
With ~2/3 of EQIX’s cost structure practically fixed, there’s meaningful operating leverage as datacenters fill up and bustle with activity. Among Equinix’s 150 IBX datacenters (that is, datacenters with ecosystems of businesses, networks, and service providers), 99 are “stabilized” assets that began operating before 1/1/2016 and are 83% leased up. There is $5.7bn in gross PP&E tied up in those datacenters which are generating $1.6bn in cash profit after datacenter level stock comp and maintenance capex (~4% of revenue), translating into a 28% pre-tax unlevered return on capital.
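The stabilized-asset return cited above is simple arithmetic on the figures quoted in the text (the ~4% maintenance-capex haircut is already reflected in the cash profit number):

```python
# Equinix stabilized IBX datacenter return, per the figures quoted above.
gross_ppe = 5.7e9    # gross PP&E tied up in the 99 stabilized datacenters
cash_profit = 1.6e9  # cash profit after DC-level stock comp and maintenance capex

pretax_unlevered_roic = cash_profit / gross_ppe
print(f"{pretax_unlevered_roic:.0%}")  # → 28%
```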
Equinix is by far the largest player in an increasingly consolidated industry. It got that way through a fairly even combination of growth capex and M&A. The commercial logic to mergers in this space comes not just from cross-selling IX space across a non-overlapping customer base and taking out redundant SG&A, but also in fusing the ecosystems of datacenters located within the same campus or metro, further reinforcing network effects. For instance, through its acquisition of Telecity, Equinix got a bunch of datacenters that were adjacent to its own within Paris, London, Amsterdam, and Frankfurt. By linking communities across datacenters within the same metros, Equinix is driving greater utilization across the metro as a whole.
While Equinix’s 14% share of the global retail colo + IX market is greater than 2x its next closest peer’s, if you isolate interconnection colo (the good stuff), the company’s global share is more like 60%-70%. Furthermore, according to management, half of the next six largest players in the chart below are looking to divest their colocation assets, and of the remaining three, two serve a single region and one is mostly a wholesale operator.
Equinix points to its global footprint as a key competitive advantage, but it’s important to qualify this claim, as too many companies casually and erroneously point to their “global” presence as a moat. By being spread across multiple continents, you can leverage overhead cost somewhat, offer multi-region bundled pricing to customers, and point to your bigness and brand during the sales process. Equinix claims that around 85% of its customers reside in multiple metros and 58% in all three regions (Americas, EMEA, APAC)…but many of these multi-region relationships were simply manufactured through acquisition, and in any case, the presence of one customer in multiple datacenters doesn’t answer the question that really matters: does having a connectivity-rich colo in, say, New York City make it more likely that a customer will choose your colo in, say, Paris (and vice-versa) over a peer who is regionally better positioned and has a superior ecosystem? I don’t see why it would. I’m not saying that a global presence is irrelevant, just that housing a customer in one region does not make it inherently captive to you in another. A customer’s choice of datacenter will primarily be dictated by regional location, connectivity, ecosystem density, and of course, reliability and security.
Which is why I wouldn’t be so quick to conclude that Equinix, by virtue of its global girth, wields an inherent advantage over Interxion, another fine connectivity-rich colo that gets all its revenue from Europe. Over the years, INXN has been a popular “play” among event-driven types hoping for either a multiple re-rating on a potential REIT conversion or thinking that, as a $3.6bn market cap peon next to an acquisitive $36bn EQIX, the company could get bought. But the company has its fundamental, standalone charms too.
The European colos appear to have learned their lesson from being burned by overexpansion in the early 2000s, and have been careful to let demand drive high-single digit supply growth over the last decade. As tirelessly expounded in this post, replicating a carrier rich colo from scratch is a near insuperable feat, attesting to why there have been no new significant organic entrants in the pan-European IX data center market for the last 15 years and why customers are incredibly sticky even in the face of persistent price hikes. European colos are also riding the same secular tailwinds propelling the US market – low latency and high connectivity requirements by B2B cloud and content platforms – though with a ~1-2 year lag.
The combination of favorable supply/demand balance, strong barriers to entry, and secularly growing demand is a potent one. And the near entirety of INXN’s growth has been organic, too.
Compared to Equinix, Interxion earns somewhat lower returns on gross capital on mature data centers, low-20s vs. ~30%. I suspect that part of this is due to the fact that Interxion does not directly benefit from high-margin interconnection revenue to the same degree as Equinix: interconnection constitutes only 8% of EQIX’s recurring revenue in EMEA vs. nearly 25% in the US, and cross-connecting in Europe has historically been free or available for a one-time fee collected by the colo (although this service is transitioning toward a recurring monthly payment model, the status quo in the US).
[INXN has invested over €1bn in infrastructure, land, and equipment to build out the 34 data centers it operated at the start of 2016. Today, with 82% of 900k+ square feet utilized, these data centers generate ~€370mn in revenue and ~€240mn in discretionary cash flow [gross profit less maintenance capex], a 23% annual pre-tax cash return on investment [up from mid-teens 4 years ago] that will improve further as recurring revenue accretes by high-single digits annually on price increases, capacity utilization, cross-connects, and power consumption.]
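The return math above can be sanity-checked with quick arithmetic. A minimal sketch, with one labeled assumption: the post only specifies "over €1bn" invested, so the ~€1.04bn invested-capital figure below is an assumption:

```python
# Rough sanity check of INXN's pre-tax cash return on invested capital.
# Revenue and discretionary cash flow figures are from the post; the
# invested-capital base is an ASSUMPTION ("over €1bn" is all the post says).
invested_capital = 1_040  # EUR mn, assumed
revenue = 370             # EUR mn
discretionary_cf = 240    # EUR mn = gross profit less maintenance capex

cash_return = discretionary_cf / invested_capital
print(f"Pre-tax cash return on investment: {cash_return:.1%}")  # ~23%
```

Note that this return compounds as the numerator grows (price hikes, utilization, cross-connects, power) against a largely fixed invested-capital base.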
But in any case, the returns on incremental datacenter investment are certainly lofty enough to justify avoiding the dividend drain that would attend a REIT conversion. Why convert when you can take all your operating cash flow, add a dollop of leverage, and invest it all in projects earning 20%+ returns at scale? As management recently put it:
“…the idea of sort of being more tactical and as you described sort of let – taking some of that capital and paying a little bit of dividend, to me, that doesn’t smack of actually securing long-term, sustainable shareholder returns.”
Equinix, on the other hand, must at a minimum pay out ~half its AFFO in dividends, constraining the company’s organic capacity to reinvest and forcing it to persistently issue debt and stock to fund growth capex and M&A. Not that EQIX’s operating model – reinvesting half its AFFO, responsibly levering up, earning ~30% incremental returns, and delevering over time – has shareholders hurting.
And there’s still a pretty long runway ahead, for both companies. Today’s retail colocation and interconnection TAM is around $23bn, split between carrier-neutral colos at ~$15bn and bandwidth providers at ~$8bn, the latter growing by ~2%, the former by ~8%. Equinix’s prediction is that the 8% growth will be juiced a few points by enterprises increasingly adopting hybrid clouds, so call it 10% organic revenue growth, which would be slower than either company has registered over the last 5 years. Layer in the operating leverage and we’re probably talking about low/mid-teens growth in maintenance free cash flow.
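For context, the blended growth of the overall TAM can be computed from the segment sizes and growth rates cited above (a sketch using only the post's figures; it excludes the hybrid-cloud kicker Equinix expects on top of the carrier-neutral segment's 8%):

```python
# Blended growth of the ~$23bn retail colo + interconnection TAM,
# weighting each segment's growth rate by its size (figures from the post).
segments = {
    "carrier-neutral colo": (15.0, 0.08),   # ($bn, annual growth)
    "bandwidth provider colo": (8.0, 0.02),
}
total_tam = sum(size for size, _ in segments.values())
blended_growth = sum(size * g for size, g in segments.values()) / total_tam
print(f"Blended TAM growth: {blended_growth:.1%}")  # ~5.9%
```

The point of the weighting: the faster-growing carrier-neutral segment is ~2x the size of the bandwidth-provider segment, so it dominates the blend, and its share of the TAM rises each year.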
At 28x AFFO/mFCFE, EQIX and INXN are not statistically cheap stocks. But it’s not so easy to find companies protected by formidable moats with credible opportunities to reinvest capital at 20%-30% returns for many years. By comparison, a deep-moater like VRSK is trading at over 30x free cash flow, growing top-line by mid/high-single digits, and reinvesting nearly all its prodigious incremental cash flow in share buybacks and gems like Wood Mac and Argus that are unlikely to earn anywhere near those returns.
INXN claims to be the largest pan-European player in the market, which is technically true but also a bit misleading because in the big 4 European markets (France, Germany, the Netherlands, and the UK) that constitute 65% of its business, by my estimate, Interxion still generates less than 1/3 the revenue of Equinix. Even before the Telecity acquisition in January 2016, EQIX generated more EMEA revenue than Interxion, but now it has more datacenters, across more countries, in the region too [well, depending on how you define “region,” as the set of countries covered by Equinix’s EMEA is more expansive than that covered by Interxion].
[VRSK – Verisk] High Quality Cash Flow, Limited Reinvestment Opportunities
SAMPLE POSTS,[VRSK] Verisk Analytics |
When evaluating the competitive advantage of a data-based analytics business, three important questions come to mind: 1) Is the data proprietary? 2) Are the insights from the data critical? and 3) Does the data fuel a product feedback loop?
Verisk began its operations in 1971 as Insurance Services Office (ISO), a non-profit enterprise started by P&C insurers to collect industry data and information that its sponsors used to determine premium rates, underwrite risk, develop products, and report to regulators – basically acting as a cost center for the P&C industry. The entity expanded into analytics with its acquisitions of American Insurance Services in 1997 and the National Insurance Crime Bureau in 1998 (which brought expertise in claims fraud detection and prevention) before converting to a for-profit organization in 1999 and going public in 2009 as Verisk (ISO became a wholly-owned subsidiary of Verisk). The company has acquired a long string of businesses since its ISO days, bolstering its core insurance risk assessment services and expanding into new industry verticals.
Verisk’s decades-long legacy as the central repository of P&C industry data is the heart and soul of its competitive advantage and has served as the foundation on which all its other offerings have been built over the years. It is difficult to overstate the critical role that Verisk plays in pricing, claims management, and administrative efficiency across the P&C industry. For instance, Verisk sets the de facto industry standard on language – “court-tested” and found in 200mn of the 250mn policies issued in the US – used in policy forms sold to insurers via subscription, ensuring consumers are getting the same amount of coverage for the same quote across insurers.
The company sits at the center of a network that procures data from a wide variety of sources (claims settlements, remote imagery, auto OEMs), analyzes it, and delivers predictive insights to clients (insurers, advertisers, property managers). The agreements through which a customer licenses VRSK’s data also allow the company to make use of that customer’s data…so essentially the customer pays Verisk for a solution that costs almost nothing for the company to deliver, and Verisk gets to use that customer’s data to enhance its own solutions – improvements that reduce churn and attract even more customers (and their data) in a subsidized feedback loop. This flywheel, built on top of Verisk’s historical advantage as the repository of industry data, has generated the world’s largest claims database (VRSK aggregates claims data from 95% of the P&C industry), containing granular information on 1.1bn claims (up from 700mn claims in 2011), with the insurance ecosystem submitting ~200k new claims a day across all P&C coverage lines. It would be effectively impossible for a competitor to replicate this ever-burgeoning flurry of data.
[Aside: VRSK’s spending on public clouds has dramatically escalated over the last few years, allowing the company to not only realize considerable cost savings (without public clouds, it would have been far more costly for Verisk to secure engagements with European retail banks, who operate under strict privacy laws that require data to reside within the country of origin) and flexibility as its datasets continue to expand in girth and complexity, but to also apply machine learning to those datasets and enable new functionality to its services].
Comprehensive data paired with a decent model generates better insights than limited data paired with a great model. An insurer that relies solely on in-house claims experience cannot underwrite risks with nearly the same degree of accuracy as one with access to the entire industry’s data. Consider all the vehicle ratings variables – make & model, location where the car is garaged (down to one of 220k census blocks), mileage, driver’s record, semi-autonomous / safety features in the vehicle – that VRSK accounts for in assessing loss costs on 323 ISO series cars (cars that are part of the ISO ratings series used to match premiums to type of car). Or the construction costs – from roofing material to drywall to electrical and HVAC contractor rates – monitored across 460 regions across North America and updated monthly using Verisk’s Xactimate software, which insurers use to quantify replacement costs, including labor and material costs, within 21k unit-cost line items in the event of a claim and compare computed insurance-to-value estimates to those submitted by brokers at the beginning of the underwriting process. A contractor who shows up to a damaged home after a storm can leverage the 100mn price points stored in Verisk’s database, estimate a policy claim, and then share that information with the policyholder, adjuster, and the claims department. Like Verisk’s policy forms, Xactimate, too, is an industry standard used to estimate nearly 90% of all personal property claims in the US.
Data is further leveraged to improve efficiency and the front-end experience of insureds. For instance, per one case study, a large auto insurer typically spent 15 minutes walking a policy seeker through its sales funnel (an initial 40-question quote inquiry that transitioned to processing, where 35% of qualified leads had their initial quotes changed, and finally to binding), with lead leakage at each step along the way, ultimately translating into a conversion rate of just ~7%. With Verisk’s LightSpeed, the insurer spent less than a minute on the sales process and doubled its conversions. By sifting through a deluge of 300mn transactions per month pulled from odometer readings, vehicle reports, and claims loss history, Verisk only needs a few pieces of information from the customer upfront to arrive at the right price within seconds at the point of quote.
With claims settlements absorbing 2/3 of the $600bn of premiums collected by the US P&C industry every year (not to mention the $6bn-$8bn of fraudulent auto injury claims), and the industry as a whole generating negligible underwriting margins over time, solutions that improve operating efficiency, improve sales outcomes, and accurately estimate the industry’s largest expense item, are obviously critical. Verisk’s products are tightly integrated into customers’ workflows and consumed as subscriptions (subscriptions represent ~85% of the company’s revenue).
1) Is the transformed data proprietary? Check
2) Are the insights from the data critical? Check
3) Does the data fuel a feedback loop that deepens the data moat? Check.
VRSK has a nice moat in the P&C vertical. But of course the best companies not only have dominant competitive advantages around their existing businesses, but also huge advantaged growth opportunities, and it is here, I’m afraid, that prospects look bleaker. Here is what management sees as its growth opportunities:
Selling existing products to new customers and introducing new products to existing ones. This is the most compelling opportunity from a probability-of-success standpoint and has motivated many of the recurring tuck-in acquisitions the company has made over the years. Over the last several years, the company has re-oriented how it approaches the customer, moving away from a siloed approach to product sales toward integrated teams: most of the company’s sales to insurers are bundled products that improve customer stickiness while boosting revenue, and sales personnel are compensated on product sales made across both reporting segments (Decision Analytics and Risk Assessment).
Repurposing existing IP and capabilities to break into new industry verticals seems reasonable in theory, but has had only so-so results in practice. For instance, in 2004 the company acquired its way into healthcare (reporting systems and analytical tools for health insurers), where it could bring its expertise to bear in a big market beleaguered by hundreds of billions of dollars of annual fraudulent claims. The company also thought that understanding the healthcare market would give it greater insight into workers comp, which constitutes 20% of the P&C market and where rising medical costs constituted a growing portion of claims. Management pushed further into the space with significant healthcare acquisitions in each of 2010, 2011, and 2012, with seemingly robust growth for several years before healthcare revenue suddenly contracted in 2015. Then, in an abrupt about-face, the company put Verisk Health up for sale in late 2015, blaming a regulatory and industry structure that made it hard to acquire unique data assets. Another example: in 2005, Verisk acquired its way into the mortgage sector on the premise that personal property data collected in the P&C business could be leveraged to detect fraud in the mortgage lifecycle, only to divest this business in March 2014.
Based on my experience, companies that acquire a bunch of companies only to divest them years later usually leave huge craters in shareholder value. That is not the case here. Verisk Health, sold for $820mn to Veritas Capital, realized a 12% pre-tax IRR during the company’s 12-year ownership period while the $155mn sale of Interthinx to First American implied a 15%+ annual return. While both businesses failed to hit the company’s 20% hurdle rate, they at least met a reasonable cost of capital threshold and if you were feeling charitable, you might even credit management for explicitly considering return requirements at all. The takeaway from these stories would seem to be that the company optimizes returns on capital when it reinvests in its core P&C business….which is why Verisk’s $2.8bn cash/stock acquisition in May 2015 of Wood Mackenzie, a subscription-based provider of data analytics and commercial intelligence for the hydrocarbons industry (with 99% customer retention growing ~10% organically at the time of acquisition) was a head-scratcher to me. Wood Mac is a mission critical application for anyone involved in the oil and gas industry (E&P companies, investors, banks) who needs to stay on top of the supply curve and understand the productivity of various oil projects around the globe.
According to management, there were some immediate cross-sell wins between WoodMac and Verisk Maplecroft (country risk monitoring), but the latter was a really tiny business. Apparently, the bigger, medium-term opportunity is introducing WoodMac’s data assets in the energy sector to its P&C insurance customer base, but for the time being this looks more like a standalone data business with an independent growth vector, and over the last year, management has acquired several more hydrocarbon data companies to bolt onto WoodMac. After 2 quarters of growth following the acquisition, WoodMac has experienced y/y declines in each of the last 5 quarters on end market weakness and currency headwinds. So, on the surface, it looks like the company paid an 18x multiple on peak EBITDA (vs. the 9x that it has historically paid for acquisitions) for a good business, but one with questionable synergies.
And then there’s this third vertical, Financial Services, within the Data Analytics segment that provides competitive benchmarking, analytics, and measurement of multi-channel marketing campaigns for financial institutions around the world, including 28 of the top 30 credit card issuers in North America. With its consortium-fed depersonalized data sets (which management claims is the most comprehensive in the payments space with a view into millions of merchants, billions of accounts, and trillions of transactions – including PoS and online transactions – tracked daily), the company offers “enhanced marketing” and risk management solutions to clients.
While the acquisitions of Verisk Health, WoodMac, and to a lesser degree Financial Services do indeed bring proprietary datasets into the company’s fold, they more mundanely resolve the problem that all great but maturing businesses run into, which is what to do with all the cash flow given limited reinvestment opportunities in the core business. To be clear, these acquired verticals seem like “moaty” businesses in their own right, but synergies with the core P&C vertical are murky or non-existent, so it’s unclear to me why these businesses have much more value inside Verisk than as standalone companies….and they certainly weren’t acquired at distressed, opportunistic prices. Also, given all the big talk around ROIC metrics and the company’s recurring acquisition activity, it’s a bit irksome that management’s comp is tied to pedestrian measures like total revenue growth (at least it’s organic growth) and EBITDA margins.
Anyhow, I’ve spent too much time on this topic as the core Insurance business still constitutes 70% of revenue and even more of total profits and in any case, if history is any guide, management will dispassionately evaluate these segments against target IRRs. According to management, from 2002 to 2015, the company has earned a 19% annualized return on its acquisitions (assuming a 10x exit EBITDA multiple) and a 15% return on share repurchases.
The problem with unique data sets is that they’re, well, unique. P&C insurance is a regional market with asset types, regulatory regimes, and demographics that vary widely by country and so the robust dataset that Verisk has spent decades building in the US has little relevance overseas. As even management will admit, the international opportunity is more aptly characterized as “multi-domestic.” Furthermore, for the most part, insurance companies overseas have less sophisticated workflows compared to US insurers, and so the product bundles are slimmer and require a more consultative sales approach (i.e. they cost more to sell).
The quality of the business is adequately represented by VRSK’s profitability: 50% EBITDA margins that, depending on acquisition activity, go up a little or down a little each year, but have certainly expanded over time (from 44% in 2011 and 48% in 2012). The 70% of revenue coming from the P&C industry should basically reflect economic growth (+lsd), share gain, and cross-selling (+lsd) offset by modest contraction from customer consolidation, so call it +5%-6%. Financial services and energy, the other 30% of revenue, grows at maybe 10%-15%, blending out to something like 8%-10% growth on a consolidated basis, which maybe translates into low double-digit growth after baking in buybacks.
At 27x trailing earnings, valuation seems uninspiring relative to other high-quality businesses that trade at similar valuations but have far bigger growth opportunities to boot (MA and V come to mind). There are weighty long-term risks to consider as well. For instance, mass adoption of Level 5 autonomy will not only reduce the number of vehicles on the road and accidents but re-define value capture within the auto ecosystem. That’s still probably at least a decade out, but on the way, improving advanced driver assistance systems that dramatically reduce accident rates (this reduction will happen quicker than some may think because of the salutary knock-on effects that equipping one vehicle with ADAS has on others; I think we’re in a moment now where the rate of distracted driving has outpaced the rate at which these systems have improved, leading to a recent uptick in vehicle accidents, but the trend will resume its southerly course in due time) may reset accident curves and undermine the relevance of historical datasets, carving horizontal inroads for competitors with more generic big data and machine learning capabilities who can create new, more relevant datasets from sensor data.
[RYAAY – Ryanair] Low Cost Flywheel
SAMPLE POSTS,[RYAAY] Ryanair |
“One thing we have looked at is maybe putting a coin slot on the toilet door…Pay-per-pee. If someone wanted to pay £5 to go to the toilet, I’d carry them myself. I would wipe their bums for a fiver.”
Michael O’Leary, CEO of Ryanair
In a letter to one of GEICO’s officers dated July 22, 1976, Warren Buffett wrote:
“I have always been attracted to the low cost operator in any business and, when you can find a combination of (i) an extremely large business, (ii) a more or less homogenous product, and (iii) a very large gap in operating costs between the low cost operator and all of the other companies in the industry, you have a really attractive investment situation. That situation prevailed twenty five years ago when I first became interested in the company, and it still prevails.”
One of the most compelling moats a company can possess is a set of self-reinforcing processes that continuously fosters lower unit costs. Interactive Brokers, for example, benefits from such a dynamic. As I’ve previously noted, IBKR can charge its customers a fraction of the commission assessed by peers and still generate significantly higher profit margins because it: 1) spends far less of its revenue on advertising; 2) does not support physical branches or an army of customer service reps; and, less appreciated but critically, 3) attracts trading volume that itself feeds a cycle of continuously falling execution costs, since the more trades the company executes, the more optimally it can route orders to low-cost venues, and better execution, in turn, leads to more trading volume.
Ryanair benefits from a similar low-cost flywheel.
This story really begins with Herb Kelleher – the founder of Southwest Airlines, the company that Ryanair modeled itself after – who observed that the hub-and-spoke networks operated by legacy carriers, designed to maximize load factors, sub-optimally left aircraft stranded on the tarmac waiting for feeder traffic and baggage transfers. Herb understood that to generate healthy profits along short point-to-point routes, he had to keep his planes off the ground and in the air for as long as possible while assiduously controlling costs, which informed an operating framework designed to hasten turnaround times: single-class, unassigned seating to expedite onboarding; a no-meals policy to obviate time-consuming clean-up; a single aircraft model (Boeing 737) to reduce crew training costs and enable speedier repairs and servicing; and at least at the start, concentrating on uncongested, secondary airports to enable rapid take-off and landing.
Michael O’Leary, profanity-oozing ass-kicker and Ryanair CEO since 1994, left Kelleher’s charm and decency on the Love Field tarmac but imported his operating model to Europe, stoking a relentless self-reinforcing moat entrenchment process that continues to this day. Early in its corporate life, by targeting secondary airports desperate for traffic – Hahn, not Frankfurt; Brescia, not Verona; Lubeck, not Hamburg; Skavsta, not Stockholm – Ryanair obtained substantial landing fee discounts. Stansted, for instance, agreed to charge Ryanair £1 per passenger vs. the official rate of £6 while Essex airport offered heavily discounted fees on new routes, laddering up to higher tariffs over 4-5 years as those routes matured and densified. Ryanair recycled the cost savings into lower passenger fares, attracting fresh waves of traffic that were used to negotiate favorable landing fees at other secondary airports and receive discounts on aircraft orders from Boeing.
[When reading coherent business triumph narratives involving bold actors and crafty strategy, it’s easy to neglect the crucial role of luck. Just to swiftly dispel the notion that Ryanair’s status as the largest and most profitable airline banner in Europe was inevitable, know that the company was on the brink of collapse in the late ’80s before Ireland’s persuasive Minister of Transport somehow convinced the Cabinet to break up Aer Lingus’ monopoly, yielding critical, life-saving routes to Ryanair. At the time, O’Leary, who was handling finances for the troubled airline, actually recommended to Tony Ryan (the airline’s founder) that the whole cash-draining enterprise be shut down before striking what turned out to be an insanely profitable compensation package for himself, one which granted O’Leary a quarter of any profits above £2mn, a goal Ryan believed outside the realm of possible at the time (this deal has since been scrapped). The Aer Lingus break-up was then followed by EU’s 1992 Open Skies treaty, which deregulated the European airline industry and allowed carriers to fly passengers between EU states. I found this story and other interesting historical tidbits referenced in this post in the book Ryanair: The Full Story of the Controversial Low-Cost Airline written by Siobhan Creaton]
Complementing this feedback loop, a keen obsession with cost control and efficiency has taken root in policies and behaviors ranging from cringeworthy (charging the disabled for wheelchairs) to heroic (O’Leary heaving baggage onto planes during strikes) to downright petty (apparently and perhaps apocryphally, at one time Ryanair banned employees from charging their mobile phones during work hours, citing theft of company electricity amounting to 1.4 pence per charge), reinforcing an unrepentantly utilitarian attitude toward customer service: humane treatment for one compromises low costs for all.
This has all crescendo’ed to a cost structure today that no European competitor is even remotely positioned to rival. Ryanair’s cost per passenger (excluding fuel) is just €27 vs. €40 for Wizz Air, the second lowest-cost airline. Culturally stodgy full-service European incumbents like IAG, Air France, Lufthansa, and Air Berlin have average ex-fuel costs per passenger that run 4x higher than Ryanair’s. Besides maybe Wizz Air, a low-cost carrier focused on Eastern European routes, no competitor can match the company’s €42 average airfare and still make money. This cost advantage will only widen as the company inks still more incentive deals with airports and takes delivery of Boeing 737 MAX aircraft, which come with 4% more seats and a 16% reduction in fuel costs per passenger.
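The arithmetic behind that claim is worth making explicit. A simple sketch using only the figures cited above (and deliberately ignoring fuel, which only widens the gap): at Ryanair's €42 fare, a legacy carrier with ex-fuel costs of ~4 × €27 loses heavily on every seat, while Wizz Air scrapes by at roughly break-even.

```python
# Who can survive at Ryanair's average fare? Compare each carrier's
# ex-fuel cost per passenger (figures from the post; the legacy figure
# is ~4x Ryanair's €27 base) to Ryanair's €42 average airfare.
ryanair_fare = 42.0  # EUR, average airfare

ex_fuel_cost_per_pax = {
    "Ryanair": 27.0,
    "Wizz Air": 40.0,
    "Legacy carriers (avg)": 27.0 * 4,  # ~4x Ryanair, per the post
}

for carrier, cost in ex_fuel_cost_per_pax.items():
    margin = ryanair_fare - cost  # per-passenger margin before fuel
    print(f"{carrier}: cost EUR {cost:.0f}, margin at EUR 42 fare: {margin:+.0f}")
```

This is the structural logic of the fare deterrent: matching Ryanair's price means a ~€66 per-passenger loss before fuel for a legacy carrier, which is why the failed low-cost initiatives of major incumbents ended as they did.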
And so, because engaging in a fare war with Ryanair is suicidal – as the failed low-cost initiatives of major incumbents like Virgin Express, BA Go, and KLM Buzz attest – Ryanair can profitably undercut competitors and steal their passengers, maximizing load factors while leveraging market share gains to secure increasingly advantaged landing fees and aircraft prices, with the capacity to incessantly reinvest the resulting savings into still lower passenger fares. Over the last dozen years, this self-perpetuating process has spurred 14% annual growth in passenger volume, amplifying scale advantages that have allowed Ryanair to cost-effectively (EBIT/passenger has remained flat over this time) extend its reach beyond secondary airports. Unable to compete with Ryanair’s prices, competitors have increasingly relinquished bases in Germany, Italy, Spain, and Belgium, compelling primary airports, which today represent just over half of all airports served by the company, to negotiate attractive volume deals with Ryanair.
[On public conference calls, O’Leary will frequently and explicitly highlight Ryanair’s cost advantage over peers, often goading competitors by name. The confrontational posture is more than just an unvarnished reflection of O’Leary’s gracious personality; it signals to competitors that Ryanair stands credibly ready to take fares down to levels that would still allow it to generate profits while producing significant losses for them, i.e. “don’t even bother competing with us on price” (my words)].
Sometime in the late-90s, Ryanair placed an £800mn order for 25 planes with Boeing with the option to purchase 20 more for £650mn, a huge commitment for what was then a relatively unknown fledgling. To test the company’s creditworthiness, Boeing rigorously stress tested the airline’s business model through computer-simulated declines in passenger traffic, fluctuating fuel costs, and exchange rates. The result: Boeing could not find a single 3-month period in which Ryanair would not be profitable.
Boeing’s Director of Sales in the UK and Ireland remarked,
“The lowest we could do was break even….It is probably the most robust model we have encountered.”
This assessment would prove mostly prescient as Ryanair subsequently delivered positive operating profits each fiscal year up to today (“mostly,” because there were losses in some 3-month periods), generating among the highest returns on capital (averaging low-teens over the last 15 years) of all European airlines. Under O’Leary’s guidance, management has acted as capable stewards of capital, opportunistically retiring 15% of the company’s share count over the last 5 years at attractive prices – with nearly 30% of that reduction taking place during the Brexit vote, when the company increased its share repurchase authorization to seize on the stock’s ~25% decline – all while maintaining a pristine balance sheet, which carries less than €600mn in net debt against €2bn in LTM EBITDA. The stock trades at 16x trailing earnings with a long runway for growth as passenger volumes, per management’s guidance, expand by ~9%/year (about 2x the industry) from 119mn in FY17 to 200mn by FY24, and assuming flat fares, earnings should grow meaningfully faster than that on lower costs per passenger (as the more efficient MAX comes on line) and higher per-passenger ancillary revenue.
Since Ryanair announced last November that myRyanair membership would be mandatory for all online bookings, membership has surged and is expected to reach 20mn by March 2017. Besides the immediately obvious revenue and cost opportunities from upselling reserved and upgraded seats (which has prompted management to raise medium-term guidance on ancillary sales) and disintermediating costly OTA and metasearch traffic, there are significant advantages from directly interfacing with a huge customer base, like fostering loyalty through customized services and even, just maybe, scaling an in-house OTA that links travelers to car rentals and hotel rooms. Over the last decade, passenger fares haven’t really budged much at all; however, ancillary revenue per passenger has nearly doubled, from ~€8 to ~€15, driving all of the per-passenger EBITDA growth over that period, and Ryanair’s burgeoning captive audience should translate into still greater ancillary sales per passenger.
Brexit has prompted Ryanair to pivot away from the UK and concentrate its growth ambitions in continental Europe. The UK represents about 2% of the company’s capacity and 3 of its 1,800 routes, so it seems like a manageable risk, though who can fully handicap the destabilizing consequences of creeping populist/isolationist sentiment? It’s a risk. Still, pick a year, any year, and you’ll find that there has almost always been a sound macro, political, or industry-specific reason not to invest in Ryanair stock: ATC strikes, terrorism, austerity measures, economic contraction, fuel shocks, low-cost competition from incumbents, low-cost competition from upstarts, foot and mouth disease, the Iraq War, Avian flu, volcanic ash clouds. Just as GEICO’s structural cost advantage remained intact despite the company’s reckless underwriting practices during the ‘70s, so has Ryanair’s persisted through these destabilizing exogenous events. And besides, through it all, it turns out that for the right price folks still want to explore different cultures, get away during holidays, and visit loved ones in distant locations. I suspect this will continue to be true over the next decade.
So if you’re a shareholder, the next time you find yourself on a Ryanair flight, as you squirm perpendicularly in your squeaky, navy blue seat, carapaced by overhead compartment doors littered with tacky revenue-generating ads, feel free to silently cheer through your discomfort.
[BRO – Brown & Brown] Compounder in a Fragmented Sector
“He wasn’t just saving his own soul when he donned his coat and hat after dinner and went out again to resume his work – no, it was also to save some poor son of a bitch on the brink of letting his insurance policy lapse, and thus endangering his family’s security ‘in the event of a rainy day.'” – Alexander Portnoy (in Philip Roth’s Portnoy’s Complaint)
Investing in a commercial insurance broker today is like being grounded on a summer weekend – you’re totally missing out on the spin-off/drop-down/Silicon Valley pool party. If Brown & Brown (NYSE:BRO) were there, it’d be the pallid guy in ochre tweed dawdling on the fringes, friendly but forgettable. But it’s got these fangs, man, you don’t even know.
Brown & Brown is a responsible consolidator in a fragmented sector with long-term compounding attributes and a strong balance sheet. Its dependable cash flow is stewarded by an opportunistic management team and Board which own 18% of outstanding shares. A value-heaping drudge free of catalysts, Brown offers a post-tax cash earnings yield of 7% growing by an expected 9-10% annually with low downside risk – generous, given the durability of its competitive advantages and the compensation offered by current riskless alternatives.
The Street’s myopic coverage is riveted to the following immediate challenges whose repercussions on long-term value, if considered at all, are embellished: 1) persistent coastal property insurance rate declines, 2) heightened private broker valuations that limit accretive capital deployment and 3) weakness in small group benefits. But housed in a moated cultural architecture – one that has conservatively and steadily created value over generations – are rejiggered incentives promoting organic growth and meaningful recent capital allocation initiatives that signal management’s belated effort to use its under-employed balance sheet.
Insurance Brokerage Industry Overview
A commercial insurance broker is an intermediary that matches fragmented buyers and sellers of property/casualty/health insurance – it works on behalf of businesses to find optimal coverage at favorable prices while acting as a distribution channel for insurers who ultimately carry the policies on their balance sheets.
The broker collects premium payments from customers, extracts a ~10-15% commission for itself and remits the remainder to one or more underwriters.
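As a minimal sketch of that cash flow – using an assumed round premium and the midpoint of the ~10-15% commission range cited above – the broker's per-policy economics look like:

```python
# Sketch of a broker's per-policy cash flow (assumed, illustrative figures).
premium = 100_000        # annual premium collected from the client
commission_rate = 0.125  # midpoint of the ~10-15% range cited above

commission = premium * commission_rate  # broker's take: $12,500
remitted = premium - commission         # passed on to underwriters: $87,500

# The broker keeps its commission and remits the rest; it carries no
# underwriting risk, since the policies sit on the carriers' balance sheets.
assert commission + remitted == premium
```

Note that a $100k premium at a 12.5% take implies a ~$12,500 commission – coincidentally in line with Brown's average annual commission per account discussed below.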
US commercial insurance brokerage is highly fragmented outside the top 10. The 100 largest brokers generated an estimated $32bn in US revenue in 2014, and of that, the top 10 accounted for 72%; three global risk managers – Aon, Willis, and Marsh (NYSE:MMC) (pro forma for Towers Watson) – represented 43%. Below the top 100 are an estimated 35k+ sub-scale agencies and brokerages… quaint, often family-run operations generating <$3mn in revenue. Of those, I estimate there are around 15k independent commercial agencies generating about $15bn in non-personal lines commissions, of which ~5k agencies aggregating $10-11bn of commercial revenue fall within Brown’s acquisition purview.
The average age of a small agency proprietor is late-50s, and at 18% of agencies at least 20% of owners/principals are older than 65, up from 10% in 2012. My college-aged cousin tells me mid-market insurance brokerage is no longer a cool career choice, so succession issues may increase the supply of motivated sellers in the coming years. Between $300mn+ in annual post-dividend discretionary cash flow and another $500-600mn available from additional leverage – together nearly 20% of market cap – BRO has significant excess capital to deploy in a vast, fragmented landscape (more on this later).
While the Company occasionally trespasses into AON/Willis/Marsh territory, Brown mostly competes with the unwashed thousands for commercial middle-market customers, broadly defined as businesses paying $25k to $3mn in annual premiums (around $2.5k to $300k in commissions to Brown). Brown’s average annual commission per account is approximately $12,500, well beneath the ~$100k minimum interest threshold of the big three. Instances where Brown does service a Fortune 1000 account are typically relegated to niche pockets ignored by larger peers.
Company Overview And Competitive Positioning
As a capital-light/labor-intensive business, BRO pays around half its revenue in salaries and commissions to employees who manage customer relationships and secure new business. The Company is well diversified across industries and does not meaningfully rely on a single client or carrier. Renewal rates remain at their historically stable low-to-mid 90%, yielding a reliable source of recurring cash flow. Effectively all of Brown’s revenue is dollar-denominated and comes from U.S. customers. Over the past dozen years, about 2/3 of EBITDA and 1/4 of revenue has trickled down to free cash flow to the firm, reflecting persistently higher-than-peer profitability.
One reason for this is Brown’s market focus. Sandwiched between fee-heavy national accounts and pure-commission small commercial ones, lies the commission-heavy middle market, Brown’s domain. As average account sizes grow, so do their bespoke risk attributes, and the brokers who service them assume increasingly lower margin but higher dollar-profit fee-based consultative roles.
The big three brokers have acquired major consulting businesses over the years and now derive at least half their revenue from fees. Fee arrangements make less sense for smaller accounts; as the owner of one mid-sized benefits broker explained to me, the opportunity cost of the 40 hours he might typically spend winning a new $40k commission account is the $5k from a $125/hr consulting engagement. And indeed, across publicly-traded insurance intermediaries, average EBIT margins in commission-based brokerage segments significantly exceed those of fee-oriented consulting ones (21% vs. 15%).
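The opportunity-cost logic in that anecdote can be made explicit (all figures are the article's illustrative ones):

```python
# Fee vs. commission economics for a mid-sized benefits broker (illustrative).
hours_to_win_account = 40        # time spent courting a new account
consulting_rate = 125            # $/hr for fee-based advisory work
new_account_commission = 40_000  # annual commission on the account won

forgone_fees = hours_to_win_account * consulting_rate  # $5,000 of fee work
commission_multiple = new_account_commission / forgone_fees  # 8x

# The commission account pays 8x the consulting alternative for the same
# 40 hours, which is why mid-market brokers skew toward commissions.
```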
The second explanatory factor is squishier but more important: here’s a heartwarming yarn from the current executive chairman/former CEO – in which he reminisces about client visits he’d make with his father – that aptly frames Brown’s relentless sales culture.
“I remember thinking as a kid that a lot of time these were two-hour conversations; and of the two-hour conversation, the first hour and 45 minutes would be about hunting, fishing, politics, the family, the children, and the last 15 minutes would be about insurance and then we left. I thought to myself, ‘I could be a lot more efficient by cutting out most of that other stuff and just talk about insurance.'”
Brown’s stalking cheetah mascot emblematizes the lean and aggressive sales ethos of an organization that rewards performing producers with generous stock grants, and where even senior executives are expected to sell. No joke, here is a line from the 2004 annual report: “Before being assured of a meal, the cheetah must finish the kill: powerful jaws clamp solidly around the throat of its prey, a tenacious grip it must hold for several minutes until the struggle ends.” As for the goosebump-inducing “money-making business” mantra that underscores sheeny photos of smugly confident, arched-back/cross-armed producers in old annual reports…yeesh.
The product of hundreds of acquisitions made throughout its corporate history, Brown’s operating model is decentralized with minimal corporate overhead, in which 190 profit centers operate autonomously and adjust deftly to local market conditions without the interference of centralized, bureaucratic mandates.
Profit centers are responsible for their own P/Ls, their financial results pitted against those of others monthly; bonuses are dispensed/dispensed with according to profitability improvements/attrition. The Company is assiduously cost-conscious and fixates on meeting budget targets, with one senior producer at Brown grudgingly touting its “bare-bones” constitution.
Heavy ownership incents organizational buy-in to cash flow maximization; in addition to the 18% of the Company owned by management and the Board, an estimated 70% of rank-and-file employees collectively own ~10%+ of shares outstanding, and nearly 8% of employee retirement plan assets are parked in Brown’s stock.
The cringe-worthy, carnally aggressive jingles; the indoctrinating pedagogy of Brown & Brown University; management’s adamant conceit of diligently prioritizing personality types over expertise; a generous program that allows employees to purchase equity at a 15% discount to market (vs. 5% at Gallagher) and rewards “winners” with stock – all merge to a weird and competitive but cohesive compound that has yielded peer-trouncing productivity.
Brown’s local market density, replicated across a national footprint, affords competitive advantages that are difficult to match. Scale intermediaries add value to the insurance ecosystem by enhancing liquidity in risk transfer markets and lubricating the frictions that arise from information asymmetries.
A broker with reach can: 1) scan pricing and risk appetite across its carrier relationships, reducing search costs for mid-sized businesses looking to place complex risks and 2) aggregate risks across industries, lines and/or exposure layers to negotiate favorable pricing and terms with carriers that a single enterprise, on its own, cannot.
For carriers with limited visibility into a highly fragmented SME market, the opportunity cost of scaling direct marketing or captive agencies is prohibitive; it’s more efficient to interface with independent brokers/agents who have embedded client rosters in local markets. Most insurers rely on a small proportion of brokers for the majority of their premiums and are loath to compromise these critical relationships by adopting disintermediating alternatives. A two-way feedback loop – propelled by clients seeking brokers with carrier access and carriers favoring brokers with heavy client flow – underpins the benefits of scale that make it difficult for smaller agencies to compete at comparable economics.
The big three brokers have woven their corporate identities around large accounts. Targeting nearly 600k U.S. middle-market businesses requires local market knowledge that comes with widely distributed, nook-and-cranny coverage. Even if the global brokers wanted to aggressively pursue small and mid-sized books, the industry’s 90%+ retention rates suggest that organically dislodging incumbent relationships would be difficult. Furthermore, the $10bn roll-up opportunity is dispersed among tens of thousands of <$1mn revenue brokers, placing centralized entities that monarchically deploy capital from Manhattan at a disadvantage to regionally organized scale players who have cultivated long-standing, personal relationships with local businesses and even competing agencies, and are better positioned to assess fold-in opportunities.
Perhaps more relevant than intra-industry threats are challenges lurking from without.
Disintermediation concerns from online platforms have plagued the brokerage industry since Gore funded ARPANET. Technology continues to snuff out the primordial inefficiencies of human agency in all sorts of industries, draining moats that have traditionally relied on high search costs (Travelocity/Expedia -> travel agencies) or complexity (Turbotax/LegalZoom/Betterment -> income tax preparation/legal filing/personal wealth management)…and certain insurance lines, those with easily stratifiable and granular risk profiles, are no different.
Commercial insurance for mid-sized businesses, however, is different. Brown is more than a dumb distribution pipe for insurance carriers; its typical client has heterogeneous risk management needs and confronts unique and opaque price-to-coverage sensitivities. Unlike standardized personal lines documentation, many commercial policy forms differ significantly by carrier, bespoke “manuscript” policies are often drafted, and policy language can be unregulated. Given larger exposures, business insurance policies are chunkier for a carrier than, say, personal auto, and pricing is tailored to reflect not only the insured’s unique circumstances, but also a given carrier’s appetite for certain risk pools. According to a 2013 Boston Consulting Group survey, the majority of polled SMEs in the US indicated that they were unwilling to purchase commercial insurance without the assistance of a broker, mostly because of policy complexity. Personal lines pricing, on the other hand, is standardized and regulated, with state insurance regulators heavily scrutinizing consumer rate filings (people vote, businesses don’t).
Finally, with the cover-your-ass gravitas that accompanies responsibility for dozens or hundreds of livelihoods, established businesses tend to be less price sensitive than consumers when it comes to purchasing insurance, prioritizing tailored coverage, post-sale service and claims coordination. One commercial broker explained to me that while a two-person start-up might behave like an indigent teenager, penny-pinching on insurance they’re brazenly sure they won’t need, employers at certain thresholds of commercial viability strongly prefer flesh-and-blood guidance.
Although commercial insurance as a whole is big business, its constituent parts are mostly niche. The companies behind heavily advertised brands address huge consumer markets where ad dollars scale, like private passenger auto. Try going a day without having Progressive’s Flo or GEICO’s Gecko quirkily nudging you to solicit a quote. The deluge of price-focused marketing in personal auto has bolstered carrier brands and shifted premium market share to the direct channel. In aggregate, commercial exposures generate roughly the same amount of premiums as personal ones, but are split among more than two dozen lines compared to just two – homeowners and auto – on the personal side. At $189bn, private passenger auto is nearly 3.5x the size of the largest commercial coverage type. Although ~7% of Brown’s revenue (my estimate) is technically labeled “personal lines,” the target insured isn’t an asset-poor schlub like me nickel-and-diming his way to coverage on a used Diamante, but rather a high net-worth monocled individual seeking life insurance for his prized horses, and whose coverage needs resemble those of a small enterprise.
My point is that large insurance markets populated with bite-sized risks are more susceptible to channel shifts; the further southwest risk exposures are on the market size and granularity plane, the harder or less profitable they are to disintermediate.
Small group benefits has recently been semi-legislated into one of those law-of-large-numbers markets and this 5.5% revenue pocket for BRO has experienced organic revenue declines as small business clients have offloaded employees onto public health exchanges. Democratized health insurance has ensured greater covered lives volume for carriers and correspondingly, bigger commission pools for placing brokers. Guaranteed enrollment means that a broker need not expend resources culling through lives that ultimately don’t qualify for coverage; the promise of more assured commissions has attracted intensifying competition, notably from cloud-based intermediaries – one broker commented that small group commission rates of mid-teens from just a few years ago have compressed to 5%-6%. Paychex, after years of 7-9% insurance services client growth from 2009-2012, is retreating from small group benefits after growth flattened in recent years.
Some brokers believe (hope?) that higher-than-expected loss ratios in public pools -> more onerous future pricing/reduced carrier participation -> small employers re-integrating employee healthcare provision or, at the very least, soliciting broker advice amidst the chaos. We’ll see. I generally think that small group benefits lacks value-added differentiation relative to commercial insurance and is becoming more intensely competitive.
The Brown family’s influence has seeped unimpeded through several generations into present day Board and management, as well as Southern U.S. politics and business.
J. Hyatt Brown – chairman since 1994, 14.9% owner of the Company, and its CEO from 1993 through 2009 – took over the business from his father, who founded the Company. His son, J. Powell Brown, current CEO, has been at BRO in various roles since 1995; Powell’s brother, Barrett, is a Sr. VP. You get the idea.
Half the Board’s directors have served since the ’90s and are septua/octogenarians who would look more at ease in a senior citizens’ community; over half the Board from 2000 still remains intact. Meanwhile, 7/10ths of the management team – with an average age of 55, a comparatively younger bunch – have been with Brown since the ’80s/’90s and have served as profit center leaders at some point in their careers. The freshest executive, CFO Andrew Watts, joined the Company in early 2014, replacing a retiring CFO who had been with the Company since 1992. Seven-year vesting periods for stock incentive grants – a reprieve from the 10-to-15-year vesting periods prior to 2013 – speak to the Company’s long-term orientation.
But longevity is hardly an unalloyed positive. Lengthy corporate histories entwined with family lineages often come with disquieting governance anxieties, and Brown is unfortunately no exception. There are, count ’em, 12 Board members whose CVs read like a “Who’s Who” of Florida/Georgia politics and business. Overlapping Board memberships and various (but mild) value-leaking related-party transactions are recurring, decade-long themes. I don’t love this. That said, the Brown clan has the vast majority of its wealth embedded in its 15%+ ownership of a Company that has borne the family name for nearly 80 years; it’s difficult to conceive of a more personal and economic motive for optimizing long-term value. So while management and the Board are insularity personified, on the whole, I take comfort in the same impermeable cultural bubble that has conservatively accreted value over the decades.
Continued expansion of shareholder value will depend significantly on continuing NPV-positive acquisitions and share buybacks. Because industry retention rates are so high (90%-95%), meaningful organic market share shifts aren’t common and acquisitions are often a sensible growth path.
For context, from 2004 through September 2015, Brown acquired $1.3bn in annualized revenue (nearly 3/4 of its current consolidated run-rate of $1.7bn) using a combination of cash and earn-outs, always staying well inside responsible leverage thresholds, perhaps to a fault. The Company mostly purchases assets with tax-deduction attributes, and since at least 1997, BRO has not taken a single intangible asset impairment, a distinction no other public peer can claim. Strategically, Brown has established local market density by acquiring regional platform “hubs” and folding in smaller, adjacent agencies. True to its decentralized form, rather than manage the process from HQ, Brown delegates the search for potential targets to local office leaders and regional executives before engaging its internal M&A team.
The Company’s profit obsession guides its takeout approach – the CFO explained to me that acquisitions are charged an explicit, de-escalating cost of capital and that fanciful but fleeting pro formas meant to hit earn-outs are disallowed. Every deal is predicated on joint agreements specifying sustainable post-acquisition profitability targets that ramp to profit-center margins within 2-3 years, a goal the Company has credibly met over time.
Historically, management has been opportunistic, aggressively capitalizing on attractive multiples in 2008 before recently orienting its attention to buybacks as unprecedented globs of private equity capital have flooded the agency landscape, displacing valuations up to and somewhat above 10x. By comparison, in 2008, the Company used all of its operating cash flow to aggressively acquire small retail agencies at 5.5-6.5x EBITDA when its own stock was trading at ~8x.
Brown’s retail fold-ins typically see meaningful revenue and cost synergies through cross-selling opportunities, carrier leverage and overhead reduction. Its platform affords acquired brokers, who have typically managed just one or two exposures for clients, access to an array of risk packages through expanded carrier relationships, in addition to back-office support that allows a more concentrated effort on sales and relationship management.
At 1.3x net debt to EBITDA, BRO is deeply uncool in today’s levered platform parade. Recent financial engineering feats that maximize balance-sheet efficiency at the expense of redundancy – veiling tricky but paramount assessments of moat depth behind precise theoretical levered returns – are just the pits, really, travesties in risk management. But Brown is not that.
Given the Company’s recurring revenue base, stable cash flows and variable cost structure, it could boost leverage to the high-end of its stated 1.5-2.5x comfort band to effect accretive acquisitions and buybacks without much incremental risk to the equity: moving to 2.5x (today) means another $500mn or so in debt, bringing total interest expense to $56mn against LTM EBITDA of ~$550mn (nearly 10x coverage).
Assuming 10% organic revenue declines – worse than any decline experienced during 2008/2009 – at far-too-onerous 60% decrementals, still leaves us with $450mn in EBITDA (and ~8x coverage) with sufficient discretionary free cash flow to delever back to 2.5x within a year.
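The stress math in the two paragraphs above can be sketched as follows (using the article's round numbers, treated here as assumptions):

```python
# Leverage stress test on BRO's balance sheet (article's round figures, $mn).
ltm_revenue = 1_700  # consolidated run-rate revenue
ltm_ebitda = 550     # LTM EBITDA
interest = 56        # total interest after adding ~$500mn of debt (2.5x net)

coverage = ltm_ebitda / interest  # ~9.8x, i.e. "nearly 10x"

# Stress: 10% organic revenue decline at 60% decremental margins,
# i.e. every lost revenue dollar takes 60 cents of EBITDA with it.
ebitda_hit = 0.10 * ltm_revenue * 0.60      # ~$102mn
stressed_ebitda = ltm_ebitda - ebitda_hit   # ~$448mn, i.e. ~$450mn
stressed_coverage = stressed_ebitda / interest  # ~8x coverage, still ample
```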
Still, with acquisition multiples drifting towards 10x+, the foregone opportunity of repurchasing BRO’s stock at 9.5x gets costlier and management has redirected its capital allocation focus. Until recently, the company hadn’t repurchased shares for treasury since at least the ’90s; it never made sense because its stock has almost always traded at a hefty premium to smaller brokers – until now.
Over the last 1.5 years, the Company has repurchased ~4% of its shares while raising its remaining authorization to $450mn (10% of shares outstanding), by far the largest in company history. On November 11, the Company announced another accelerated repurchase program in the amount of $75mn (another 1.7% of market cap).
Going forward, I expect a mix of buybacks and acquisitions, with the lower-multiple option exerting stronger pull on capital as the valuation disparity widens. I sketch the per-share blessings of levered buybacks and acquisitions in the “Scenarios” section.
Broker commissions are a function of written premiums, which in turn are the product of: 1) premium rates (the price of insurance) and 2) insurance exposure units (the amount of insured stuff – sales, payrolls, vehicles, inventories, properties, etc.). If you look at a chart mapping the growth rate of commercial lines net written premiums since the ’70s, you’ll see that the great preponderance of data points fall north of “0,” with mid-single-digit rates punctuated by recrudescent hard markets that drive net premiums 20%+ higher for a year or two. Insurance is a market where stable demand (exposure units) intersects with touch-and-go supply (pricing). You wouldn’t know it from that chart, but eight of the last 14 years and 21 of the last 30 were soft markets. Commercial prices today are about where they were in 2000, unadjusted for inflation.
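The decomposition above – premium growth as the compound of rate changes and exposure-unit changes – can be written as a one-liner (the inputs below are illustrative, not the article's figures):

```python
def written_premium_growth(rate_chg: float, exposure_chg: float) -> float:
    """Premium growth = (1 + rate change) * (1 + exposure growth) - 1."""
    return (1 + rate_chg) * (1 + exposure_chg) - 1

# Flat pricing with 3% exposure growth -> ~3% premium (and commission) growth
soft_market = written_premium_growth(0.00, 0.03)
# A hard market: +20% rates on the same exposure base -> ~23.6% growth
hard_market = written_premium_growth(0.20, 0.03)
```

Because the two factors compound, stable exposure growth keeps premium growth north of zero in most years, while the occasional rate spike produces the 20%+ hard-market outliers.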
The brokerage industry’s incessant fuss over rates disguises a more mundane but convincing exogenous arbiter of brokerage industry growth: general economic activity. Brown’s management believes that historically 2/3 to 3/4 of its organic growth has been driven by exposure unit changes as opposed to pricing. The chart below generally validates this claim, but with an important caveat: the influence of commercial insurance pricing on Brown’s organic growth is most pronounced during periods of sudden and dramatic rate changes (>10%). In flat to soft-ish pricing environments (-7% to +7%) like the present, insured clients mostly stick with their carriers and brokers, and organic growth hugs the green line more tightly. In any case, industry growth never quite matches rate changes, as movements along the demand curve attenuate full pass-through (corporate risk managers typically stay within budget bands, reducing risk units or raising deductibles as prices elevate and vice versa).
But whatever, while BRO’s organic growth has certainly exceeded commercial insurance pricing over time, it has lagged peers. Here are the main culprits:
1) Brown is struggling in small group benefits (5.5% of revenue). The public exchange disintermediation that I discussed above was recently exacerbated by regulation in the State of Washington that resulted in the termination of several health association plans, impacting retail segment organic growth/EBITDA margins by at least 60bps/30 bps this year. Based on my conversation with the CFO, this was an isolated termination that will be anniversaried in 2016. As previously mentioned, I am expecting unabated small group benefits challenges; still, even eviscerating all $90mn in small group business at onerous contribution margins would only dent LTM cash earnings by around 8-9%.
2) Relative to peers, BRO is over-indexed to coastal property, an air-pocket of 15-25% rate declines within an otherwise stable P/C complex. I estimate that roughly 8-9% of the Company’s core commissions are linked to coastal property, a ~1% headwind to organic growth. The absence of storm activity has also hurt Brown’s third-party claims administration and Wright Flood Insurance businesses, which have operated at depleted profitability post Superstorm Sandy.
While these organic growth detractors have featured prominently in sell-side reports, their combined impact on earnings power (2-4% of LTM earnings) seems rather trivial when explicitly quantified and mundane when placed in context of assorted challenges the Company has periodically encountered over the last 15 years. Nonetheless, the Company has recently taken action to combat these headwinds.
The Board recently re-engineered significant annual cash incentives, binding management compensation to newly introduced organic revenue growth targets – a refreshing pivot from past freebies. Prior to 2015, annual cash incentives were only ostensibly linked to earnings growth – executives were awarded 100% of a “target cash incentive amount” even with flat income growth, with a maximum payout of 115% and a minimum of 90%. This year, the payout percentages range from 0% to 200%, so there’s far more risk and reward to missing and meeting goals. The dollar payouts are meaningful, with the portion tied to organic growth (40% weight) alone amounting to nearly 100% of the CEO’s base salary.
This change in compensation policy preceded a recent organizational realignment in which the retail brokerage segment was decentralized into six regions to sharpen product focus and more closely align incentives with accelerated organic growth and profitability by region. For example, in addition to overseeing an assigned territory, each regional leader will be responsible for developing strategy around a key initiative – one regional leader will focus on small business/personal lines, another employee benefits and so on.
Finally, for what it’s worth, leading indicators show accelerating economic activity in Brown’s most important markets.
Note: You may find it helpful to peruse Appendix: Company And Segment Overview prior to reading this.
What follows are my estimates of where the Company’s stock might trade two years out in various states of the world, an entirely different exercise than handicapping BRO’s fundamental worth. Marrying disciplined capital allocators with an annuity-like asset can create value counter-cyclically (assuming Brown’s moat remains intact); in what you might call one of them good problems, during deep cyclical downturns precipitating multiple contraction, BRO can aggressively repurchase shares and roll-up distressed competitors. In ebullient times, management can reinvest in the business and fortify its balance sheet in anticipation of the inevitable downturn. Cycles come and go, competitive advantages and resource allocation are more enduring; and for a business constantly inundated with cash, moats + management are far more pertinent to long-term value creation. In that spirit, the real downside case ties itself to the following categories:
1) Misallocation of capital: CEO J. Powell Brown has been with the Company since his late 20s. Debuting as a lowly account executive and plowing through roles of intensifying responsibility, he at least appears to have been denied the most palpable blessings of nepotism. While management has made several chunky acquisitions over the last 3-4 years, they’ve been thematically consistent with adjacent platform assimilations that have nudged the Company into services, programs and wholesale lines over the decades and seem synergistically sensible. So far, the large acquisitions in recent years – Arrowhead (2012), Beecher (2013) and Wright (2014) – have generated unlevered returns in excess of capital costs, been integrated at ~Company-average margins, and have led organic growth.
2) Disintermediation creep: In Innovator’s Dilemma, Clayton Christensen cogently argues that disruption can occur when incumbent solutions overshoot a market’s threshold needs just as previously inadequate alternatives meet them at lower price points. Organizational constructs that limited pursuit of lower-tier, unprofitable revenue while scaffolding success in larger markets, now render the incumbent vulnerable to viable competition from the heretofore less sophisticated bottom dwellers.
The scale advantages of two-way markets, in particular, can be persistently stable until they’re suddenly unglued, and it’s conceivable that competitive forays in granular, homogeneous risk pools firm a base for higher-order disruptions. One potential narrative is that Brown’s commission-loaded market-making function is piecemeal usurped, forcing it to rely increasingly on lower-margin consulting fees in a domino fall that looks something like: small group benefits (where it’s already happening) -> worker’s comp -> large group benefits… or personal auto -> commercial auto -> commercial property… Besides the natural forces of competition, legislatively mandated single-payer universal health care is a fat disintermediating tail risk that would directly jeopardize 15% of the Company’s revenue (and perhaps 20%+ of earnings power).
Technology has nay-said the Company and its industry since the first tech boom and at least up to the present, Brown has consistently demonstrated that human relationships and insurance-specific expertise remain highly relevant. Paychex and ADP, firmly embedded in outsourced automated HR/payroll functions for SMEs and presumably best positioned to offer adjacent services, have had surprisingly little success brokering insurance products. Broker channel checks suggest that Zenefits is witnessing enormous churn and a recent Wall Street Journal article claims that the company has missed its $100mn revenue target by more than half while Fidelity has marked down the value of its investment by 48% between August 1 and September 30.
My base case ties organic growth rates to current coincident and leading economic growth indicators in BRO’s most important states and assumes that coastal property rates continue to decline 15-20% while P/C pricing, in aggregate, remains range-bound. Coalescing management commentary from conference calls and annual reports, I’ve discerned exposure lines where possible. In aggregate, I’m expecting <3% organic growth over the next several years, translating into flat EBITDA margins. Given management’s and broker industry commentary on frothy M&A conditions, coupled with a doubling of the Company’s share repurchase authorization, I assume BRO reduces its M&A appetite from historical levels and pays dearer prices of around 10-11x EBITDA while dedicating an increasing amount of cash flow to retiring shares. Net leverage is modestly increased to 1.8x.
LTM 2017E EBITDA: $603mn
LTM 2017E cash EPS: $2.79
Exit cash ROE: 12.3%
Sept. 2017E target valuation (using average valuation multiples during periods of comparable organic growth over the last 10 years):
Implied stock price @ 15x cash earnings: $41.79
Implied stock price @ 9.5x EBITDA: $36.39
Bear case: The economy stutter-steps into a 2008/2009-style recession amidst unmitigated softness in commercial rates. The Company's cost cuts do not quite keep up with its falling top line and margins contract. Smaller brokerages reset their lofty valuation expectations, empowering BRO to opportunistically consolidate at more favorable multiples of 9x (still much higher than the ~6x seen during the last recession), partially buttressing organic profit declines.
LTM 2017E EBITDA: $499mn
LTM 2017E cash EPS: $2.27
Exit cash ROE: 10%
Sept. 2017E target valuation (using average valuation multiples during periods of comparable organic growth over the last 10 years):
Implied stock price @ 11x cash earnings: $24.93
Implied stock price @ 7.5x EBITDA: $20.31
Bull case: Turbulent weather events interrupt a decade+ of depressed loss ratios, policyholder surplus is scarred, and coastal property rates reverse their downward trend, bolstering core brokerage commissions and TPA revenue while denting high-margin profit-based commissions. Better pricing, together with exposure growth, fuels organic growth to ~6.5%, driving modest EBITDA expansion. BRO levers its balance sheet up to 2.4x EBITDA to acquire companies at 10x-11x and repurchase 18% of its shares.
LTM 2017E EBITDA: $674mn
LTM 2017E cash EPS: $3.26
Exit cash ROE: 18%
Sept. 2017E target valuation (using average valuation multiples during periods of comparable organic growth over the last 10 years):
Implied stock price @ 17x cash earnings: $55.41
Implied stock price @ 10.5x EBITDA: $46.85
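For transparency on the mechanics, the scenario targets above reduce to simple multiple arithmetic. A minimal sketch, where the net debt and share count inputs are my own illustrative assumptions (neither is quoted in this write-up), so the outputs only approximate the stated targets:

```python
def price_from_cash_eps(cash_eps, pe_multiple):
    """Target price as a multiple of LTM cash EPS."""
    return cash_eps * pe_multiple

def price_from_ebitda(ebitda_mn, ev_multiple, net_debt_mn, shares_mn):
    """Target price from an EV/EBITDA multiple: equity value = EV minus net debt."""
    return (ebitda_mn * ev_multiple - net_debt_mn) / shares_mn

# Base case, using the $2.79 cash EPS and 15x multiple from above:
pe_target = price_from_cash_eps(2.79, 15)                # 41.85, vs. the $41.79 quoted

# EV/EBITDA route with ASSUMED 1.8x net leverage and a HYPOTHETICAL 128mn shares:
ev_target = price_from_ebitda(603, 9.5, 1.8 * 603, 128)  # ~36.27, near the $36.39 quoted
```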
Company And Segment Overview
Retail (51% of LTM revenue; 48% of LTM EBITDA): see “Insurance Brokerage Industry Overview” segment. While the Company primarily targets small to mid-sized businesses, it has bolstered its large account presence, with a particular emphasis on group benefits. Between the acquisitions of Beecher Carlson (July 2013; $364mn), Pacific Resources Benefits Advisors (May 2014; $99mn; double-digit revenue growth over the last five years and 98% retention), and Strategic Benefit Advisors (June 2015; $64mn), I estimate that large accounts constitute nearly 7% of total BRO revenue (14% segment revenue). Management emphasized to me that within large accounts, Brown does not directly compete against AON/MMC/WSH, but instead surgically focuses on unique, commission-abundant placement engagements.
Significantly, these acquisitions bring solutions that bolster Brown’s existing middle-market capabilities. For example, Beecher’s strong cyberinsurance presence has been successfully adapted and cross-sold and its proprietary databases and predictive analytics are offered as preventative maintenance solutions to reduce future claims. Pacific Resources, meanwhile, has introduced fuller ancillary offerings to Brown’s upper-middle-market clients. Approximately 84% and 16% of segment revenue is commission and fee based, respectively.
The next two segments, National Programs and Wholesale, function as market access providers for third-party retail brokers looking to place risks on behalf of clients. Retail brokers, particularly newer and smaller ones that don’t have the minimum volume in certain specialty lines to directly interface with carriers, access these intermediaries to place client risks that fall outside their core industry focus, better enabling them to compete with larger agencies. For example, a retail broker that specializes in restaurant professional liability can place workers’ comp risk for a one-off dental practice.
Programs (26%; 31%): Brown functions as a managing general agency, wherein the insurance carriers delegate underwriting authority to the Company (Brown selects the program risks and sometimes adjusts claims and collects premiums) for policies catering to professions and trade groups like dentists/lawyers/architects/municipalities/CPAs. The Company distributes policies through both its own network and independent agents and is mostly compensated through commissions.
The two largest businesses within Programs are Arrowhead (a standard MGA with a large presence in southern California whose insured risk pools include commercial package insurance for the automotive aftermarket, architects and engineers, quake, and workers' comp for select industries) and Wright (a national flood underwriter), both recent acquisitions that I estimate contribute 36% and 24% of segment revenue, respectively, and have generated organic growth above the segment average.
Wright requires some explanation.
In May 2014, Brown acquired Wright Insurance Group for ~$550mn. As the largest participant in FEMA's "Write Your Own" Program (WYO) with 19% market share, Wright Insurance, while nominally a flood insurance carrier, is economically a commission-based service provider for the National Flood Insurance Program; that is, Wright does not bear underwriting risk. Flood insurance rates have been increasing at 7% annually over the last 10 years and legislative proposals to bring rates up to actuarially sound levels portend further improvements.
A class action lawsuit was filed a year ago against Wright and U.S. Forensic, a third-party engineering firm contracted by Wright to assess certain Sandy claims. It alleges that engineering reports were manipulated with the intent to underpay claims for “hundreds” of policyholders. A number of WYO flood carriers are under similar scrutiny. A panoply of questions, including who, if anyone, bears culpability (the contracted engineering firm that allegedly falsified reports, the carrier who engaged it, or FEMA) remain outstanding.
Despite the political grandstanding and media coverage, underpayment of Sandy claims from bogus assessment reports seems rather inconsequential in perspective. The 1,500 payments-related Sandy cases being heard in New York represent about 1% of all Sandy flood claims, and less than half of that 1% relate to the engineering review process in question. It appears that the lead plaintiff in the class action was underpaid by ~$150k. Applying these proportions and that payout to the 20k Sandy claims processed by Wright gets us to a $15mn true-up, ~2.5% of EBITDA. Assuming punitive damages of another 4x that amount results in a total liability of $75mn, less than 2% of BRO's market cap.
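The scoping above is just proportion-stacking; a quick sketch using the figures from the paragraph:

```python
# Back-of-the-envelope Sandy liability for Wright, per the proportions above.
wright_claims     = 20_000   # Sandy flood claims processed by Wright
litigated_share   = 0.01     # NY payment cases ~1% of all Sandy flood claims
engineering_share = 0.50     # less than half of those involve the disputed reviews
per_claim_true_up = 150_000  # lead plaintiff's ~$150k underpayment, used as a proxy

true_up  = wright_claims * litigated_share * engineering_share * per_claim_true_up
punitive = 4 * true_up       # assume another 4x in punitive damages
total    = true_up + punitive
# true_up = $15mn (~2.5% of EBITDA); total = $75mn (<2% of market cap)
```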
Besides the standard legal boilerplate that management has included in its filings professing immateriality of lawsuits, here are a few more assurances for you: A.M. Best reaffirmed Wright’s A- (“Excellent”)/Stable rating in July; FEMA renewed Wright’s WYO carrier status in October; and “no new legal proceedings, or material developments with respect to existing legal proceedings” have occurred through 3Q15.
Wholesale Brokerage (14%; 16%): This segment sells excess and surplus commercial insurance to retail agencies on a commission basis. Of all Brown's segments, wholesale is the most closely tied to coastal property, which, in light of the 15-25% coastal pricing declines, makes this segment's high-single-digit organic revenue growth and company-leading 36% EBITDA margins that much more impressive.
Services (9%; 6%): Provides insurance services for fees that are not directly tied to general premiums. The two main businesses here are choppy, high-margin claims administration revenue and fees for Medicare services. The former gifted, then took back, accelerated organic revenue growth and margin expansion post-Sandy and, given the absence of major storms, currently operates at depressed levels.
Compared to BRO, the market has rewarded AJG's faster organic growth with more generous earnings multiples in recent years while ignoring Brown's consistently superior profitability. Brown's employees are paid a smidge less than Gallagher's, but generate comparable productivity on lower overhead, driving a higher profit spread per employee. If the last decade is any guide, Gallagher must deliver 50% more revenue than Brown to generate the same EBITDA dollars.
 Brown’s amortizable intangible assets consist of purchased customer accounts and non-compete agreements from acquisitions. These are non-cash/non-economic charges that I add back to GAAP earnings to arrive at a truer measure of profitability. At ~$0.60 per share vs. $1.70 in LTM GAAP earnings, this adjustment is meaningful.
 Applying 80%/10%/10% weights on base/bull/bear case scenarios.
 2014 Business Insurance survey.
 Agents technically represent insurers; brokers, insureds…but the functional distinction is blurry and I use the terms interchangeably.
 The 100th largest broker generated $26mn in commissions according to the 2014 Business Insurance survey.
 Most agencies have a mix of personal and commercial lines. I calculate the number of commercial agencies by multiplying the number of agencies in each revenue cohort by each cohort’s mix of commercial revenue – think of it as a “commercial agency equivalent” number. My dollar value estimate is calculated as the product of commercial agency equivalents and the mid-point revenue of each revenue cohort.
 Source: 2014 IIABA Best Practices Study. On average, for agencies that generate between $2.5mn and $10mn in revenue, the largest shareholder has close to 60% ownership and is around 57 years old.
 The Company estimates the fair value of earn-outs on acquisitions, which are reflected on the balance sheet as part of the purchase price. Changes in estimated value flow through the income statement as non-cash charges/gains. Historically, these changes have been nominal and I exclude them from EBITDA. I include the fair value of earn-outs in the enterprise value.
 Adrian Brown, who founded the Company in 1939 with his cousin Charles Owen.
 Roberts, Sally. “Longtime agency chief J. Hyatt Brown retires.” Business Insurance. Web. 5 July 2009.
 (Through grants, open market purchases, or participation in the Company’s employee stock purchase program).
 By comparison, management and the Board at competitor Arthur J Gallagher together own less than 1.6% of shares.
 Brown’s in-house sales and leadership program.
 For example, a broker might place construction worker’s comp up to a threshold liability with one carrier and excess layers with another.
 A June 2013 Boston Consulting Group Commercial-Insurance survey shows that for a typical US insurer, 80% of premiums are derived from 20% of brokers/agents.
 Businesses with between 20 and 1,000 employees. Brown & Brown – J.P. Morgan Insurance Conference, March 18, 2015; U.S. Census Bureau; U.S. Small Business Administration.
 Personal auto, for instance. With Americans logging over 3 trillion vehicle miles per year, there are robust datasets on claims experience and accident frequency across demographics and geographies.
 Umbrella liability policies, in particular, can vary wildly from one carrier to the next depending on risk type and industry, and are often negotiable.
 Hoying, et al. (Nov 2014). Mining the Untapped Gold in SME Commercial Insurance. The Boston Consulting Group, 7-8.
 A business owner who reconciles his personal taxes with Turbotax will still outsource those of his 40-person business to an experienced CPA.
 Hoying, et al. (Nov 2014). Mining the Untapped Gold in SME Commercial Insurance. The Boston Consulting Group, 5-6.
 In 2012, personal lines insurers GEICO, State Farm and Progressive were among the top 25 most advertised US brands, edging out Home Depot and Budweiser; this, according to the Ad Age Datacenter analysis of U.S. measured media spending from Kantar Media.
 BRO defines small group as < 100 employees. Organic declines in small group have been offset by growth in its large group business.
 (Who have mostly seen success in small businesses with fewer than 50 employees). Perhaps the most salient example is the hyperbolic growth of Zenefits, a San Francisco-based cloud software company that brokers health insurance for small businesses at a 5% commission rate. In California, the company is the largest Anthem broker for companies with fewer than 50 employees. Mind you, Zenefits' growth profile says nothing about its ultimate efficacy as a business. Several brokers I contacted believe that this "software company disguised as a well-informed consultant/broker" is experiencing significant customer churn as clients become increasingly disillusioned by the lack of benefits expertise and service. Despite generating < 10% of Brown's revenue with firmly negative profitability, Zenefits was valued at $4.5bn in its last funding round, 85% of Brown's total enterprise value. That I think this excessive may speak to my lack of imagination; at a similar revenue multiple, BRO would be worth…just kidding. Benefits brokers are combatting the Zenefits threat by increasingly white-labeling cloud-based interfaces of their own.
 In its most recent quarter, ANTM witnessed a 30% “downward migration” relative to its expectations as the company continues to lose share to competitors who are pricing risk uneconomically and unsustainably.
 Public exchanges require scale and full participation to function effectively and current estimated public exchange enrollment for 2015 of 9mn-10mn lives is well short of the Congressional Budget Office’s original estimate of 15mn. UNH recently announced that it is considering withdrawing from exchange participation after major losses from policies sold through the exchanges. From Stephen Hemsley, UNH CEO on 11/19: “In recent weeks, growth expectations for individual exchanges have tempered industry-wide, co-operatives have failed, and market data has signaled higher risks and more difficulties while our own claims experience has deteriorated…”
 Including Toni Jennings, who took a hiatus from 2003-2006 to serve as Lieutenant Governor of Florida.
 J. Powell Brown (joined in 1995); Sam R. Boone (1987); Linda S. Downs (1980); Richard A. Freebourn (1984); Robert W. Lloyd (1999); Charles H. Lydecker (1990); J. Scott Penny (1989); Anthony T. Strianese (2000); Chris L. Walker (2003); R. Andrew Watts (2014).
 cf. 15 at AON and 13 at MMC, significantly larger and more complex global entities.
 Notable ones include $12k to the Chairman for business entertainment expenses and club membership dues; a forgivable $500k loan to P. Barrett Brown, an SVP and brother of the CEO; and $141k in fees for corporate aircraft owned by a family-managed LLC.
 BRO has not used its stock as acquisition currency since 2001.
 From 1989 through 2013, BRO’s net leverage ratio ranged from -1.1x to 0.7x.
 As you might expect given the brokerage industry’s people-heavy/asset light structure, most of the purchase price on agency acquisitions is assigned to goodwill and intangibles.
 He wouldn’t reveal the number, but I estimate it to be originally set at around 6%-7%, pre-tax.
 The table up there is somewhat misleading in that it co-mingles contribution margins and acquired margins; but given the lackluster organic growth over most of these time spans and the high variable cost nature of the brokerage business (i.e. low operating leverage), the table should be reasonably representative of acquired margins.
 “2008 was the third largest in company history with $115.4 million in forward annualized revenues. Forty-three of the 45 transactions were in the property and casualty and employee benefits retail agencies. Perhaps the most important quality of these acquisitions is that the earnings performance are in line with our expectations and margins comparable to the existing operations….In 2008 we continued to strength our M&A transaction capability. We added to our staff increasing the number of individuals in our legal, financial, and quality control teams to handle an increased number of transactions given the opportunity.” – Jim Henderson, BRO COO; 4Q08 Earnings Call, 2/17/2009.
By comparison, in 2007, when Brown’s stock was trading at ~9.5x, acquisitions were executed at 6x-7x, translating into high-teens after-tax unlevered returns.
 Target productivity and margins reflect those of an independent agency in the $2.5mn-$5mn bucket per the 2014 IIABA Best Practices Survey.
 BRO’s current leverage ratio is well below peers and unlike AJG, AON, MMC and WSH, the Company has no pension liabilities.
 The most onerous debt covenant requires a net debt to EBITDA ratio back to 2.5x within 12 months of a leverage event.
 (Most have been done under accelerated share repurchase programs at a weighted average price of $32.13).
 (In absolute dollars and as a percent of market cap).
 BRO defines organic growth for core commissions and fees the same way an honest retailer defines “same store sales” growth. It excludes contingent commissions and large books of business from newly hired producers.
 Gross premiums less premiums ceded to reinsurers; primary rates mirror reinsurance pricing over time.
 Soft markets are those characterized by falling insurance rates.
 Even in a soft rate environment like 2004-2005 and 1998-1999, BRO posted positive organic growth as a strong economy translated into more insured units.
 Association Health Plans allow small employers in the same industry to pool their risks, taking advantage of scale economies in buying to offer health benefits to their employees. To qualify, an association must meet two criteria; it must not: 1) exist solely for the purpose of selling insurance to its members (the “bona fide” association hurdle) and 2) charge different rates for different risk profiles. Early this year, the State of Washington disapproved a whole slew of these plans for violating at least one of these two rules.
 The precipitous price declines – a function of 1) catastrophe-absent storm seasons driving record low loss ratios and record high carrier surpluses and 2) yield-starved private equity / hedge funds capitalizing alternative insurance vehicles – have persisted for several years.
 An overlooked silver lining is that ~3.5% of Brown’s revenue and ~8% of EBITDA comes from high-margin profit-sharing commissions tied to carrier profitability (which, in turn, is negatively correlated with storm activity).
 Services segment (9% of revenue) LTM margins of 23% vs. 30% in 2013.
 From the Company’s Proxy filed 3/27/15: “The cash incentive amounts for Messrs. Powell Brown, Watts and Penny were calculated based on the following formula: [target cash incentive amount] times [100% plus the percentage change in earnings per share, without regard for change in acquisition earn-out payables and adjusted to exclude the net after-tax effect of those sales of offices that occurred during the fourth quarter of 2014].”
 Each state's Philly Fed Leading Economic Indicator is meant to lead its Coincident Index by 6 months. Per the Philly Fed website, Leading Indicator = Philly Fed Coincident Index + state-level housing permits (1 to 4 units), state initial unemployment insurance claims, delivery times from the Institute for Supply Management (ISM) manufacturing survey, and the interest rate spread between the 10-year Treasury bond and the 3-month Treasury bill.
 These seem atypically chunky against the last 10 years but rather unremarkable in the context of the last 20.
 PAYX and ADP have nearly 600k and 600k+ clients, respectively.
 Winkler, Rolfe. “Highly Valued Startup Zenefits Runs Into Turbulence.” The Wall Street Journal 12 Nov. 2015: n. pag. The Wall Street Journal. 12 Nov. 2015. Web.
 Green/grey/red = bull/base/bear.
 I assume cost cut / revenue decline ratios comparable to 2008/2009.
 To just 33.6%, still at the lower end of management’s nebulous long-term 33-35% target.
 BRO manages over 50 programs w/ 40 carriers.
 $120mn in annual revenue, $76mn of which represented flood insurance and resides in the Programs segment.
 A cooperative arrangement between the P&C industry and FEMA in which 80 participating private carriers underwrite and process claims under their own banners but cede the risk to the federal government. The carriers receive expense reimbursement and commissions for every policy they sell.
 Around 20% of the flood market is operating at subsidized, actuarially unsound rates.
 Hugely reductive legal analysis, I know…I’m simply trying to scope the dimensions.
 (Quirky, unusual risk – excess workers’ comp, garage, inland marine, jewelry – that is hard to place in traditional markets)
 I estimate ~1/3 of commissions.
 TPAs act as an extension of an insurer’s claims department, providing critical processing capacity for claims payments and customer service during disasters, when carrier resources are strained.
 Segment organic growth in 2012/2013/2014 were 8.6%/12.2%/-8.1% while EBITDA margins were 26.3%/30.4%/22.8%.
 Brown offers employees a more generous purchase discount on stock purchase plans, which is not reflected as compensation on the income statement. Adjusting for this, however, does not make a material difference.
 Brokers are variable cost heavy, so the contribution margins relative to extant ones on low single-digit organic growth are quite nominal. Plus, drawing from AJG management comments, I believe the organic growth bogey for margin expansion is higher at AJG than it is at BRO, and even if that weren't the case and Gallagher's greater organic growth profile were in fact yielding strong contribution margins, it would only strengthen my hypothesis – that AJG is acquiring companies with substantially and sustainably lower pro-forma margins than BRO.
 Per AJG 2Q15 earnings conference call.
 "Let's call it what it is. There are a lot of acquirers out there, some of whom are paying what we might consider ridiculous prices in certain transactions." – J. Powell Brown, CEO of BRO (2Q15 Earnings Conference Call)
“…it is a competitive market, but I would say the difference is really when you look back four or five years ago, the multiples were lower than they are today, so it’s very important for us to understand that.” – Daniel S. Glaser, CEO of MMC (3Q15 Earnings Conference Call)
“It is true that certain assets are being priced up particularly those where the private equity industry is also a potential buyer. So, you have obviously seen some of the things going on and going around the North American regional brokerage market where you often do see very, very high prices being paid.” –Dominic Casserley, CEO of WSH (3Q15 Earnings Conference Call)
Looking at TRUP is like staring at one of those ambiguous images that could be both a rabbit and a duck, both a saxophonist and a woman’s face: we know that this is an insurance company, but we’re compelled to analyze it as a data-driven subscription service.
Of course, all responsible insurers are data-driven and the recurring nature of premiums also makes them subscription-like, but we don't typically think of insurers as a subscription service in the same vein as a SaaS enterprise. We do for TRUP largely because its management team has diligently trained us to focus on SaaSy metrics. Some of this is just reframing vocabulary: "subscription fees" not "premiums", "members" not "policyholders", "Territory Partners" not "agents". Trupanion's avoidance of the insurance industry lexicon – scarcely a mention of reserves, underwriting leverage, medical cost trends, book value – is so obtrusive as to almost certainly be by design (what fast-growing enterprise wants to be seen as a boring insurance company?).
But this is also the first insurer I've seen that disaggregates retention by member cohort and discloses lifetime value to customer acquisition cost ratios. That's not a knock on the company. The first principle of any recurring, subscription-like model, insurance company or not, is onboarding customers for far less money than those customers will generate in lifetime profits. With most of the stuff you buy – a haircut, an iPhone – there is little confusion about the value of the product to you. Insurance is a lot squishier because you don't know at the time of purchase whether you'll need it. For major categories of insurance – where the covered thing is monetarily significant and its cost readily determinable, as in the case of a car or house, or where it transcends monetary value, as in the case of your family's health – we easily buy into the collective understanding that in any given year, the premiums from those on whom fortune smiles subsidize those on whom she frowns. We don't feel like our rights are being trammeled when law mandates we buy such insurance, or that we're being bamboozled when our insurer earns an underwriting profit from this scheme. We're risk averse and understand that peace of mind is worth paying for and that an insurer should be compensated for giving it to us. But dogs and cats?
While there's no question that we treat our pets far more humanely than we used to and that pets have graduated from the status of mere property, they still don't occupy the same sanctified hemisphere as humans, and we're far from consensus on the range of seriously unfortunate health outcomes that we should be willing to prepare for. If you ask your friends about pet medical insurance, as I did, you'll likely find that only a few have it, maybe half think it a reasonable purchase, and the rest may outright scoff. Rather than pay $50 for a Trupanion policy with a 10% deductible, why not just put $40 in the cookie jar every month as a sort of pet health savings account? If I'm shelling out $600 this year for a Trupanion policy and eat 10% of the costs, I need to think there's a 20% chance of at least $6,000 in medical emergencies in the next 12 months for the policy to be "worth it". [To put that in context, treating hip dysplasia for a Golden Retriever can cost anywhere between $2,000 (if diagnosed early) and $5,000 (if diagnosed late and a hip replacement is required).]
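As a sketch of that "worth it" arithmetic (a risk-neutral, single-event simplification of my own, ignoring deductibles and multi-claim years): the strict break-even claim probability is the premium divided by the insurer-covered share of the bill. A pure break-even hurdle for a $6,000 emergency comes out below the 20% demanded above; the gap is the cushion I'd want before calling the policy clearly worth it.

```python
def breakeven_prob(annual_premium, claim_size, reimbursement_rate=0.90):
    """Claim probability at which the policy's expected payout equals its annual cost."""
    return annual_premium / (reimbursement_rate * claim_size)

p = breakeven_prob(600, 6_000)   # ~0.11: above an ~11% chance, the policy wins in expectation
```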
Of course, the decision to purchase insurance can be an emotional one that goes beyond sterile expected value calculations, and the more importance you place on your pet's life and comfort, the less willing you are to roll the dice…but the point is that, on the surface, it's not entirely clear to me that pet owners feel they need insurance for their pets. But can they be made to think they need it? There's some evidence to believe that they can: the number of pets covered by Trupanion has compounded by 25%/year since 2011 to over ~360k, and nearly 85% of members renew their policies every year…and half the lost profits of those who don't renew are offset by existing members who insure more pets or refer their friends. The proselytizing efforts begin with the 40,000+ vets working at the 28,000 vet hospitals across North America (20,000 of which are independently owned and operated), who deliver ~54% of TRUP's new members and from whom pet owners seek trustworthy guidance. [According to this recent Motley Fool interview, Trupanion's CEO claims that when vets recommend Trupanion to their clients, 1 in 4 people enroll.]
Sometimes oblique coverage restrictions, annual payout caps, and long waiting periods for covered treatments are buried in fine print; other times, the insurance company and the vet charge based off different fee schedules, with the pet owner paying for the entire procedure out-of-pocket based on the vet's fee schedule only to be reimbursed weeks later by the insurance company using a lower "usual and customary" rate [which is based on fees charged by other physicians in the surrounding area for the same procedure]. A vet probably won't be blamed for not proactively recommending pet medical insurance, but pushing a policy that culminates in an expensive "gotcha" moment is poison. Trupanion attacks these causes of friction and confusion by:
1/ pricing off the cost of care. Trupanion carefully estimates the cost of medical care across 1mn+ dimensions – species, breed, zip code, deductible, age – and simply tacks on 30% to arrive at the policy price paid by the pet owner…so, a pet owner is basically paying a 30% premium above expected medical costs to rid herself of cost uncertainty. Trupanion then pays out 90% of the vet’s invoice, with no limits per claim or illness. So, it doesn’t matter if one vet charges $2,000 and a rival vet across the street charges $1,000; Trupanion will cover 90% of the eligible treatment cost in both cases. Assuming Trupanion has accurately estimated the cost of care, in aggregate, 70c of every premium dollar Trupanion collects goes to paying vet invoices;
2/ re-directing reimbursement flow (in progress). With traditional pet insurance, the pet owner covers the entire vet invoice upfront and then hopes the check that arrives from the insurer in 2 weeks will reimburse her for the "right" amount. While most of Trupanion's claims are still paid via check, they are increasingly routed through Trupanion Express, in which Trupanion pays the vet 90% of the bill directly, thereby taking the burden of up-front payment away from the consumer. Express can be integrated into practice management software so that an invoice is immediately shot over to Trupanion, who wires the requested funds into the vet's bank account in less than 5 minutes. The number of vet hospitals with Express installed has grown from 89 in mid-2014 to 500 in 2015 to ~1,300 today, with over 30% of vet invoice dollars channeled through Express, on its way to 95%+. No other competitor in the space is even bothering to pursue a similar direct payment scheme.
These two changes largely lift the confusion attending discrepant pricing schedules and alleviate the strain of what in some cases could be an enormous immediate upfront payment for the pet owner, followed by an anxiety-ridden reimbursement interval. The member knows her out-of-pocket burden from the get-go and will not be financially surprised down the line. And because a Trupanion member need not wrestle with the financial uncertainty of costly medical care, she spends twice as much on vet services over her pet's life as an uninsured pet owner…and the vet can simply focus on recommending the best treatment, without also stressing over the owner's ability to pay.
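The cost-plus economics in 1/ pin down Trupanion's aggregate payout ratio with just two numbers, assuming (as the company intends) that its cost-of-care estimates are accurate in aggregate:

```python
markup        = 0.30   # premium = expected vet invoices grossed up by 30%
reimbursement = 0.90   # Trupanion pays 90% of each eligible invoice

# Share of each premium dollar that goes to paying vet invoices:
payout_ratio = reimbursement / (1 + markup)   # ~0.69, i.e. the ~70c on the dollar cited above
```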
Even so, considering the history of disappointing experiences with pet medical insurance, it’s no wonder that winning over vets has proven a laborious process. It can take 3-5 visits for a Territory Rep to even get her first meeting…so, if the TR is making 1 visit every 6 months, we’re talking years. Trupanion makes close to 100,000 face-to-face vet visits every year, with 200 hospitals per territory visited every 60 days (with touch frequency now increasing with impending account manager build out), and even after hammering away at vet conversion for nearly a decade in the US, the company still has significant work ahead: against a universe of 25,000 addressable hospitals in the US, only 8,100 are actively recommending Trupanion today, a figure that is growing by ~500-600 hospitals/year. Competitors, on the other hand, continue to take a direct-to-consumer approach, carpet bombing their territories with online marketing to create awareness, which in the absence of vet buy-in has not proven very effective.
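To put the remaining vet-conversion runway in rough perspective (a straight-line extrapolation of my own that ignores any acceleration from the account manager build-out):

```python
addressable_hospitals = 25_000
active_hospitals      = 8_100
net_adds_per_year     = 550      # midpoint of the ~500-600 hospitals/year pace above

years_remaining = (addressable_hospitals - active_hospitals) / net_adds_per_year
# roughly 31 years to full penetration at the current pace
```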
Building trust is a time consuming process that requires TRs to persistently contact vets, who must then observe positive customer experiences firsthand. These relationships cannot be bought, but must be earned over time: a vet hospital will not compromise a pet owner’s continuing business for referral fees and besides, Trupanion does not offer kickbacks of any kind to vets for referring patients. You may be surprised to know that with the exception of VPI Nationwide, the largest player in the space with 40% share (vs. #2 Trupanion with 20%), other competitors like Healthy Paws and Pet Plan don’t underwrite the policies they sell. Trupanion’s conceit is that by owning all links in the chain – from sales to underwriting to claims processing to customer service – and forgoing reinsurance, it can provide insurance at a ~20% lower cost than peers. These savings are used to cover a greater proportion of claims costs – 70% of premiums at Trupanion vs. closer to 50% for peers – enabling a “no-fuss” payments experience that induces greater satisfaction from pet owners (who remain Trupanion members for longer) and buy-in from vets (who feel comfortable enough to recommend Trupanion to new clients).
Of course, sustainably profiting off a cost-plus model and credibly delivering on the promise to immediately cover 90% of any invoice requires precise, granular insight into the cost of pet acquisition and medical care. Over the 17 years since inception, with data from 1.5mn+ claims and over 500,000 invoices/year, Trupanion has amassed cost and retention experience across 1mn+ category permutations…so, for instance, the company understands how the claims experience of a 5-year old bulldog in zip code 11201 differs from that of a 3-year old Shih Tzu in 60047 and can price the two pets accordingly. There are no shortcuts to this process. Building claims experience and fleshing out statistically significant patterns at such a granular level takes time, a learning curve that even a well-funded competitor cannot easily surmount. Although VPI Nationwide has been around longer than Trupanion, their dataset is less robust because they don’t price their policies with nearly as many observations (zip code, for instance) as Trupanion, nor do they cover congenital and hereditary conditions. [to be clear, Trupanion doesn’t cover pre-existing conditions either, but unlike other insurers, it doesn’t refuse coverage on all future illnesses arising because of pre-existing conditions].
Still, while data may be a competitive advantage in the early stages of penetrating a market niche, I’m not sure this in itself constitutes a real moat. Data has to be proprietary, valuable, and part of a self-reinforcing process (data network effects) for it to count as a sustainable edge. There’s a reason why you never hear insurers tout data as a unique advantage…there are diminishing returns to data as the relationship between price and insured risk doesn’t change all that much for granular exposures and eventually becomes common knowledge.
(the Lifetime Value of a Pet (LVP) to Pet Acquisition Cost (PAC) ratio that Trupanion reports every quarter is the blended output of explicit LVP:PAC targets across a slew of subcategories. So, while the lifetime value of, say, a 2-year old cat in Manhattan with a $1,000 deductible will differ from that of a bulldog puppy in Pittsburgh with no deductible, Trupanion can toggle pricing and acquisition spend to get iteratively closer to a common IRR across subcategories, with no cross-subsidization between them. Ideally, the table would look something like this…
Not. quite. there. yet…
(Tables 13 & 14 from Trupanion’s 2016 Annual Letter)]
The meager pet insurance adoption rate in North America (< 2% of ~180mn dogs and cats) compared to certain Western European countries (25% in the UK, 50% in Sweden) is an oft-touted part of the bull case. Of course, one wonders why, when pet insurance has been available in America since at least the mid-80s, the disparity exists in the first place? I don’t really know. But one reasonable-sounding explanation I’ve heard is that in Western Europe, pet insurers launched by first winning over vets and those vets then pushed the product to consumers…whereas in the US, insurers started by asking “what price will pet owners pay for this thing called ‘pet insurance’?” and then reverse engineered a product without consulting the vets, yielding something that both consumers and vets hated.
In any case, it doesn’t really matter. I think we just want to see that the method to driving category adoption is sound. In an embryonic market, it’s up to pioneering companies to create the category. Pet medical insurance is so nascent in the US that although Trupanion continues to claim share – there are around 20 brands that make up the pet insurance space, but 2 players, VPI Nationwide and Trupanion, account for 60% of the insured pets – it does so in a market that, against the broader population of insurable pets, barely exists. Rather than look to foreign countries for cues, it seems better to just make a judgment call on whether a) the value proposition for vets makes sense, b) the company has the will and wherewithal to push the ball forward, and c) the product, when discovered and used by the end consumer, solves a real need (including a need the consumer previously didn’t even know she had). a) and c) are tied at the hip since, as previously discussed, vets will only pitch Trupanion if the pet owner perceives benefit. While I harbor doubts about the intrinsic value to a pet owner, those personal reservations are trumped by nearly a decade of data strongly supporting the claim that yes, pet insurance is becoming a thing in the US. As borne out over many cohorts, the average life of a Trupanion member is around 6 years…
…during which period the company pulls 20c in variable profit from every incremental premium dollar, reinvesting most of that into acquiring new pets…
….at compelling lifetime values translating into huge IRRs, leveraging sales & marketing and fixed expenses along the way.
The IRR math works roughly as follows: on average, Trupanion pays $175 to acquire a pet and recognizes premiums of around $53 for that pet each month over 71 months. That 53 bucks is whittled away like so:
Monthly premium: $53
Vet invoices: ($37) [70% of the $53 premium]
Variable expenses: ($5)
Contribution profit: $11 [20% of premiums. The Lifetime Value of a Pet (LVP) is computed off this figure]
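The waterfall above is simple enough to sanity-check in a few lines (a sketch using the post’s rounded figures, so outputs land near, not exactly on, the quoted numbers):

```python
# Per-pet monthly unit economics, using the figures quoted above
premium = 53.0                 # average monthly premium per pet
vet_invoices = 0.70 * premium  # ~70% of premiums paid back out as claims -> ~$37
variable_costs = 5.0           # claims processing, customer service, etc.

contribution = premium - vet_invoices - variable_costs   # ~$11 per pet per month
contribution_margin = contribution / premium             # ~20% of premiums
```

On these inputs, contribution profit works out to ~$10.9/month and a ~21% margin, consistent with the rounded ($37) / $11 / 20% lines above.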
After adding back sales & marketing, Trupanion’s trailing 12 month EBITDA margin after stock-comp is ~8%, implying fixed costs of ~12% of premiums [20% contribution margins less 8% EBITDA margins ex. sales and marketing], so 60% of contribution profits are being consumed by fixed operating costs at the moment.
But at scale, which management pegs at ~700k pets (vs. over 360k today growing low/mid-teens y/y), the company thinks it can do 15% adjusted operating margins excluding the cost of adding new pets. When we back off 1%-2% for stock comp, it’s maybe more like 13%. Given the degree to which Trupanion has leveraged its cost structure over the last 6-7 years, I find this claim credible.
And so, at scale…
Fixed expenses: ($3)
Capital charge: ($0.6) [8% x (monthly premium divided by a premium:surplus ratio of 6x)]
Profit/pet/month: $7 [13% of premiums]
In other words, the company is generating ~$520 in profits over the average lifetime of a member, around 3x the cost to acquire that member. The cash flow streams over ~71 months impute a 65% IRR. Alternatively, at a 15% discount rate, the present value of cash flows over a subscriber’s life is ~$330, nearly twice the cost of acquisition. Imputing attractive unit economics, of course, requires a sufficiently wide LVP-PAC spread. The $175 pet acquisition cost that I am using assumes the current LVP/PAC of 4.5x, where it has roughly been for the last 5-6 years. This figure would have to decay to below 2.5x before the NPV of pet acquisitions turns negative (i.e. destroys value).
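That back-of-envelope can be reproduced under the simplifying assumption of a level $7/month profit stream; since the inputs are rounded, the outputs land near, not exactly on, the quoted ~$520, ~3x, ~$330, and 65%:

```python
# Illustrative per-pet IRR/NPV check (simplified: level monthly cash flows)
pac = 175.0           # upfront pet acquisition cost
monthly_profit = 7.0  # at-scale profit per pet per month
months = 71           # average member life (~6 years)

lifetime_profit = monthly_profit * months   # ~$497, vs. ~$520 quoted
multiple_of_pac = lifetime_profit / pac     # ~2.8x the acquisition cost

# NPV at a 15% annual discount rate, converted to a monthly rate
r = 1.15 ** (1 / 12) - 1
npv = sum(monthly_profit / (1 + r) ** t for t in range(1, months + 1))
# npv ~ $336, close to the ~$330 quoted

# Solve for the monthly IRR by bisection, then annualize
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    pv = sum(monthly_profit / (1 + mid) ** t for t in range(1, months + 1))
    lo, hi = (mid, hi) if pv > pac else (lo, mid)
annual_irr = (1 + lo) ** 12 - 1   # mid-50s% on these rounded inputs
```

On level $7 cash flows the IRR comes out in the mid-50s rather than 65%; the gap is just rounding in the per-month inputs, and the conclusion (a very high-IRR acquisition) is unchanged.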
But I see no reason to expect such a draconian deterioration and in fact, it’s possible that as the company further densifies its markets with vet allies, layering on radio and TV spend may even boost conversion from vet recommendations, as has been true thus far in Trupanion’s mature markets where half or more of vet hospitals actively recommend Trupanion.
Given the unit economics and the runway, it’s not hard to see how things can get interesting [though whether they can get interesting enough to buy the stock at today’s valuation is an entirely different question]. Assuming today’s annual PAC spending of ~$15mn/year grows by a few million per year and even assuming LTV/CAC deflates to below 4x, I can get to just over 800k enrolled pets in year-7, around 12% growth per annum. Absent some “silver bullet for cost effective accelerated organic growth” (company’s words), management seems determined to fund its expansion entirely with internally generated profits.
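That enrollment path can be roughed out with a toy simulation. All inputs here are my assumptions for illustration, not the company’s model: ~$15mn of PAC spend growing $3mn/year, $175 per acquired pet, and churn of roughly one-sixth of the book annually (a 6-year average life):

```python
# Toy enrollment model -- assumed inputs, for illustration only
pets = 360_000        # current enrolled pets
spend = 15e6          # annual pet acquisition (PAC) spend, growing $3mn/yr
cost_per_pet = 175.0  # average acquisition cost per pet

for year in range(7):
    gross_adds = spend / cost_per_pet  # pets acquired this year
    churn = pets / 6                   # ~1/6 of the book rolls off each year
    pets += gross_adds - churn
    spend += 3e6

# pets ends up around ~750k after year 7 -- the same ballpark as the post's
# "just over 800k"; modest tweaks to churn or PAC assumptions close the gap
```

The point of the sketch is not precision but that unspectacular assumptions about spend growth and retention compound to a roughly doubled book in seven years.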
Applying a pre-PAC scale margin of 13% to implied revenue gets us to around $70mn in pro-forma EBITDA, which, assuming reinvestment, will be growing by low-teens, providing still more ammo for pet acquisition. With 800k enrolled pets against a universe of 180mn dogs and cats in North America, Trupanion will still not have even nicked the surface of its potential…so there should still be plenty of opportunity to deploy maybe 40%-50% of pre-PAC EBITDA into onboarding pets at 40%+ IRRs. In theory at least. What you’ve just seen is the rabbit. But what about the duck?
After all, this is still an insurance company and balance sheet strength reigns paramount. But is it and does it? Trupanion’s underwriting leverage – premiums to surplus ratio, a measure of how much underwriting risk you are assuming relative to the capital you hold – is greater than 6x [Trupanion does not reinsure its risk, so subscriptions = gross premiums = net premiums]. Whether this exceeds the standard of prudence depends on the nature of risk being insured. Typically, a ratio greater than 3x is considered unusually aggressive for a P&C underwriter and an insurer with significant high-severity natural catastrophe exposure will keep it closer to 1x. Health insurers, in contrast, will underwrite 7x+ their surplus. In my experience, growth and conservative underwriting are hardly ever executed well simultaneously, and if this were a run-of-the-mill P&C underwriter, Trupanion’s thin capital base would probably be reason alone to pass.
But I think Trupanion is a different animal. The risks covered under pet medical insurance are bite-sized and absent a major, widespread health contagion, uncorrelated. Agglomerating hundreds of thousands of claims results in a more predictable range of experiences from year-to-year at the population level than most traditional P&C exposures and with far less tail risk too. Given the highly granular, uncorrelated nature of the insured risks, catastrophe is a remote risk. The company’s exposures are also short-tailed, meaning that claims-triggering events are readily apparent and the costs from those events accurately estimable and paid soon thereafter: over 90% of the company’s reserves as of 3q17 are related to activities incurred in 2017 and close to 95% of claims paid over the last year were related to business underwritten in the same year. Only a minuscule amount of claims incurred and paid relate to prior years. [one negative consequence to premiums heading out the door to pay claims almost as soon as they come in is that Trupanion doesn’t benefit from “float” income as a typical insurer does].
Compare this to the “long-tail” risk of asbestos, where the health consequences from exposure remained latent for many years and claims were still being paid more than a decade after policies were originally underwritten…that is, people were getting silently screwed by asbestos during the coverage period; the insurance companies didn’t know it at the time and so didn’t properly reserve for it. In contrast, the short-tail nature of Trupanion’s risk means that its best guess about claims cost and frequency is rapidly (in)validated and any deviations can be dynamically accommodated through price adjustments. While Trupanion won’t hike a member’s monthly subscription fee based on her pet’s individual medical condition, it will do so if the average cost of care for all pets within the same sub-category rises, so systematic pricing errors are quickly rectified. Steering the ship is 48-year old founder, CEO, and ~8% owner Darryl Rawlings, who has a good story about how his parents’ inability to afford a life-saving procedure for his childhood dog prompted him to start Trupanion 10 years later (maybe too good a story?). In any case, I highly recommend reading his annual letters, which are refreshingly exorcised of hygienic corporate bullshit and lay out Trupanion’s operating strategy with a useful degree of granularity. Darryl seems authentically enthusiastic about this pet insurance mission and appears to “get” how value is created…it’s hard to imagine a diversified underwriter/bank/savings institution like Nationwide pursuing this opportunity with the same single-minded vigor.
[MCO – Moody’s] The Self-Reinforcing Standards Moat
Moody’s is a Nationally Recognized Statistical Rating Organization (NRSRO), a title bestowed by the SEC on a handful of credit rating agencies, the top 3 of whom act as an oligopoly in the US debt ratings game. As you well know, Moody’s (and S&P and Fitch) fell into disrepute during the last financial crisis when its ratings on vast swaths of corporate and securitized paper proved worthless, its grossly conflicted issuer-pay model laid plainly bare. But testament to the company’s resilient business model, and toothless fines and regulatory censures notwithstanding, Moody’s Investor Service (“MIS”, the credit rating agency side of the business that constitutes ~2/3 of revenue and ~85% of EBITDA) has thrived since the crisis, compounding revenue and EBITDA by 10% and 15%, respectively, since 2009 and generating more of each vs. the 2007 peak:
MIS segment, $ millions
Over the last 100+ years since its founding, Moody’s ratings – derived from a consistent framework applied across 11k and 6k corporate and public finance issuers, respectively, in addition to 64k structured finance obligations – have become the veritable benchmark by which market participants, from investors to regulators, peg the credit worthiness of one debt security against another. NRSRO ratings underpin the risk weightings that banks attach to assets to determine capital requirements, dictate which securities a money market fund can own, and, in ostensibly surfacing the credit risk embedded in fixed income securities, make it easier for two parties to confidently price and trade, enhancing market liquidity. I was a research nerd in the bond group at Fidelity just prior to and during the crisis. It’s hard to overstate just how tightly Moody’s and S&P (and to a lesser degree, Fitch) ratings were stitched into the fabric of our ratings and compliance infrastructure and the day-to-day workflows of analysts and traders on the floor.
Because of such industry-wide adoption, a debt issuer has little choice but to pay Moody’s for a rating if it hopes to get a fair deal in the market: an issuer of $500mn in 10-year bonds might pay the company 6bps upfront ($300k), but will save 30bps in interest expense every year ($15mn over the life of the bond)….and each incremental issuer who pays the toll only further reinforces the Moody’s ratings as the standard upon which to coalesce, fostering still further participation. This feedback loop naturally evolves into a deeply entrenched oligopoly. In terms of total ratings issued, S&P and Moody’s are at the top of the heap. There are actually 10 NRSROs, but unless you work in credit, you’ve probably never heard of most of them (Egan Jones anyone?)
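The issuer’s side of that trade is easy to make concrete (same figures as above):

```python
# Why issuers pay the rating toll: fee vs. interest savings
issue_size = 500e6                       # $500mn of 10-year bonds
rating_fee = issue_size * 0.0006         # 6bps upfront -> $300k
annual_savings = issue_size * 0.0030     # 30bps lower coupon -> $1.5mn per year
lifetime_savings = annual_savings * 10   # -> $15mn over the bond's life
payback = lifetime_savings / rating_fee  # a 50x return on the rating fee
```

A one-time $300k fee against $15mn of lifetime interest savings is why the toll gets paid without much negotiation, and why each payer further entrenches the standard.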
The government’s determination of NRSRO status is premised on “whether the rating agency is ‘nationally recognized’ in the United States as an issuer of credible and reliable ratings by the predominant users of securities ratings” (per this SEC report), a criterion that is itself partly tautological, since national recognition flows in part from the government’s NRSRO designation in the first place. And when things go horribly wrong and these ratings are shown to be the reactive measures that they are, the agencies simply appeal to freedom of speech protection under the First Amendment.
This is a really hard business to screw up. Who wants to rock the boat? Certainly not the staid management team at Moody’s, which thrives on 5 year plans, formulaic capital allocation policies, and farcically granular guidance that plays to the myopic expectations of sell-side model tweakers (though I give management props for expensing stock comp in its adjusted profit numbers). You will never see Moody’s carve out an “Other Bets” P&L for new innovations. Day One will always be yesterday. [If watching Sundar Pichai saunter on stage to fulsome fanboy applause against jubilant theme music from Fitz & The Tantrums provokes reflexive eye-rolling, then do yourself a favor…watch the 2016 Moody’s Investor Day webcast and take refuge in the sterile quietude of a generic albescent conference room where every cough and throat clear is awkwardly amplified against the AV projector’s fan’s sad whir.]
MIS’ 2016 revenue was about 60% transactional (tied to new debt issuance) and 40% “recurring” [per 10K: annual fee arrangements with frequent debt issuers, annual debt monitoring fees and annual fees from commercial paper and medium-term note programs, bank deposit ratings, insurance company financial strength ratings, mutual fund ratings], a mix that has been reasonably stable during the quiescent issuance environment of the last 5-6 years. Debt issuance in the US, which constitutes nearly 2/3 of MIS revenue, can be choppy from year-to-year….
…but the overall stock of debt has been steadily growing…
…so, as you might expect, MIS’ recurring revenue has served as a reliable anchor during stormy issuance periods.
MIS segment, $ millions
Still, recurring profits did little to cushion the punishing issuance swoon during the last recession. Revenue from corporate and structured finance bond issuance declined 26% and 53%, respectively, from 2007 to 2008, forcing a ~$575mn revenue decline that translated into a $450mn EBITDA hit.
MIS segment, $ millions
* excludes a negligible amount of “other”
We don’t know the profit split between transactional and recurring profits (and I don’t even know if such a determination is possible since labor is the biggest component of SG&A and allocating the cost of an analyst’s time between new issuance and maintenance work feels like arbitrary hair splitting). But, I think we can confidently say that non-recurring revenue per dollar of new issuance is way larger than recurring revenue pulled from each par dollar of the rated installed base, and so big swings in transactional revenue have a disproportionate impact on profitability…though, keep in mind that heavy debt issuance in a given period adds to the stock of outstanding debt and thus the monitoring fees earned in future periods.
Given the lofty contribution margins attached to new issuance, the prospect of a reversal has been a source of trepidation for me. Transactional revenue growth has proceeded at a strong, though not torrid, 12% pace over the last 6-7 years as issuers have seized on a stubbornly low rate environment to refinance debt and add leverage to their balance sheets.
Meanwhile, outside a commodity-driven hiccup in 2016, high yield default rates are well below the historic average (which should give you pause if you believe in cycles and mean reversion).
[Aside: the below exhibit, which breaks out the uses of funds from high yield bond and bank loans, is interesting in its own right. In the late 1990s, 20%-25% of companies that raised funds cited internal investment as a reason for doing so vs. just a mid-single/high-single digit percentage today.]
I’m being unhelpfully obvious when I say that credit conditions feel toppy. But even if mean reversion is impending, 2008/2009 seems an inappropriate analog since not only is the catalyst driving systemic financial concerns that loomed so large back then less relevant today, but also a significant chunk of the company’s pre-2008 profits came from its reckless rubber-stamping of toxic asset-backed securities.
(MIS revenue, $mn)
Disaggregating MIS’ revenue streams per above, we see that outside of structured finance, the revenue declines were actually not sooo bad during the worst financial crisis in decades, thanks of course to issuance stoked by aggressive rate-deflating monetary policy measures. Structured finance grew from just $384mn in revenue in 2002 to $873mn in 2006 (an 18% CAGR) and was so profitable that even while non-SF revenue grew by 14% in 2009, overall MIS EBITDA still declined as SF revenue contracted by another 25% from 2008’s harrowing 53% decline. I don’t believe MIS has significant revenue streams tied to comparably negligent and profligate underwriting today, and would expect the profit hit from a cyclical correction to be far more muted.
Also, due to the surge in 7-10 year paper subsequent to the financial crisis – MIS’ non-structured revenue increased by 17%/yr from 2008 to 2012 – the refinancing needs over the next 4 year period (2017 to 2020) are 30% greater than they were from 2013 to 2016, providing an intermediate tailwind to transactional revenue, though 1h17’s whopping 30% y/y growth in corporate finance revs is clearly testament to some pull-forward of refinancing needs. Debt issuance cycle aside, companies have been increasingly tapping the capital markets, rather than banks, for their debt funding needs. In Europe, bonds constitute just 23% of non-financial debt [bonds + bank loans] outstanding vs. 52% in the US, with the mix shifting in favor of bonds over at least the last decade.
Management thinks that disintermediation (+2%-3%) plus debt issuance prompted by global GDP growth (+2%-3%) plus pricing (+3%-4%) should sum up to around ~high-single/low double digit revenue growth through the debt cycle, which sounds reasonable to me and is consistent with the 9% revenue CAGR MIS has realized since 2011. And on top of that, there’s another 2%-3% contribution from Moody’s Analytics, MCO’s less good business segment that offers a range of risk management, research, and data products and services, and constitutes about 1/3 of revenue and 16% of EBITDA (corporate overhead is already allocated to business segments). Almost all of the ~$1bn that the company has spent on acquisitions (out of cumulative free cash flow of ~$8bn) over the last decade through 1q17 has gone towards bolstering MA, mostly small tuck-ins.
Then, on May 15, 2017, management announced the €3bn acquisition of Bureau van Dijk. Moody’s is spending 3x more on this one acquisition than it has on the sum of all previous acquisitions over the last decade. BvD is an Amsterdam-based company that aggregates data on 220mn private companies across a wide range of geographies and industries and makes it available in hygienic, organized form to 6k corporate and government customers. This acquisition will be “tucked into” RD&A [In 2016, about 54% of MA’s revenue came from “Research, Data, and Analytics,” which is really just an extension of MIS insofar as it realizes revenue by selling research and data (analysis on debt issuers, economic commentary, quantitative risk scores, etc.) generated in MIS. The quality of RD&A mirrors that of the ratings segment, with 95% retention rates driving hsd revenue growth (90% organic) from hsd pricing and volume since 2011], boosting its revenue by ~43% (and contributing ~8% to MCO’s total revenue). BvD does not own the data, but rather licenses it from ~160 obscure data providers in various jurisdictions before “cleansing” and standardizing it for subscribers who use it to, for instance, better assess credit risk, conduct M&A due diligence, set transfer pricing reporting policies and docs for multinationals, and identify potential B2B sales leads.
Management claims that this business benefits from network effects, by which I assume they mean that the license fees BvD pays to suppliers are pegged to the number of users of that data and so more users compel more suppliers to make their data available to BvD, which in turn draws more users. Going off the high-level historical financials provided by Moody’s, BvD has performed like a truly kick-ass asset, with revenue expanding at a steady 9% CAGR (all organic) over the last decade, growing every year right through the recession, and EBITDA margins expanding from 39% in 2006 to 51% in 2016. But great assets go for great prices. MCO is paying a lofty 12x revenue and 23x EBITDA, while its own stock traded at “just” ~14x EBITDA at the time of announcement. €3bn is triple what private equity firm EQT paid for BvD less than 3 years ago. One might argue that if we extrapolate the last decade’s 12% annual EBITDA growth out 5 years (which might actually be reasonable given the seemingly predictable, consistent nature of the business) and apply estimated out-year synergies ($40mn revenue / $40mn costs), we’re looking at €295mn in 2021 EBITDA, which puts the multiple at ~10x, but even management concedes that it is reaching on valuation and falling short of their typical 10% cash yield target on this one. The revenue synergies seem fairly modest (14% of revenue, 5 years out) and sensible on the surface. For various reasons BvD has found it difficult to break into the US market (unlike regions outside the US, financial data on private companies in the US is sparse…plus, BvD who?) and still derives 3/4 of its revenue from Europe.
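The “~10x by 2021” arithmetic can be reconstructed roughly as follows (my reconstruction, not management’s model; the ~60% incremental margin on revenue synergies is an assumption):

```python
# Back-of-envelope on the BvD purchase multiple, EUR millions
price = 3000.0                         # purchase price
ebitda_2016 = price / 23               # quoted 23x trailing EBITDA -> ~130
ebitda_2021 = ebitda_2016 * 1.12 ** 5  # extrapolate the decade's 12%/yr -> ~230
cost_synergies = 40.0                  # estimated out-year cost synergies
revenue_synergies = 40.0 * 0.6         # $40mn revenue at an assumed ~60% margin
implied_multiple = price / (ebitda_2021 + cost_synergies + revenue_synergies)
# -> ~10x, consistent with the ~EUR295mn 2021 EBITDA framing above
```

Note how much work the 12% extrapolation and full synergy credit are doing: strip either out and the out-year multiple drifts back into the low teens.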
Moody’s can bundle BvD’s datasets into MA’s analytics products and sell a more robust bundle to its US customer base [notably, MA already feeds BvD’s data into the loan origination solution it sells to financial institution clients, and some MA customers already use BvD data to drive their credit models], and it can cross-sell MA products into BvD’s customer base. Finally, BvD’s dataset on smaller, private companies gives MIS the opportunity to provide credit ratings to the underserved SME market, though this seems like a more distant aim. [Here’s a high-level summary of Moody’s business mix post-BvD; MA gets a nice margin lift and its EBITDA increases from ~17% of consolidated to nearly 1/4.]
(MCO LTM + BvD 2016, $ millions)
Still, most of management’s justifications – the acquisition reduces the volatility of the ratings business, is accretive to per share earnings, accelerates growth forecasts, gets the company access to new revenue opportunities like transfer pricing and tax planning that have little to do with the core ratings business – have jack to do with value creation and reek of generic Wall Street pandering. And while BvD’s business seems good enough on its own merits that I don’t think the acquisition will be grossly value destructive, it’s tough to credibly claim that much incremental value has been added at this lofty purchase multiple.
Outside of RD&A, there are two other business lines: 1) Enterprise Risk Solutions (11% of post-BvD revenue; risk management software and services…basically, financial institutions use Moody’s tools to create credit, market, and operational risk tables and make them available to their regulators; has grown revenue by ~11% organically over the last 8 years) and 2) Professional Services (4% of post-BvD revenue; financial training and certification, mid-single digit organic revenue growth since 2008….seems like a pretty mediocre business, but one which management insists is an important entry point to the customer). Taken as a whole, Moody’s Analytics is just “meh” compared to other data and analytics peers, in my opinion. Great analytics businesses tend to have self-reinforcing data feedback loops, which are not very relevant to MA.
[Here is what I wrote about Verisk: The company sits at the center of a network that procures data from a wide variety of sources on one side (claims settlements, remote imagery, auto OEMs, name your buzz word – smart cars, smart watches, smart cities) analyzes it, and spits out predictive risk and customer insights to their clients on the other (insurers, advertisers, property managers). The agreements through which a customer licenses VRSK’s data also allows the company to make use of that customer’s data, so essentially the customer pays Verisk for a solution that costs almost nothing for the company to deliver and Verisk gets to use that customer’s data to bolster the appeal of its own products, which improved solutions reduce churn and attract even more customers (and their data) in a subsidized feedback loop.]
Its solutions seem more akin to templated reporting and risk management to sate regulatory requirements than data-fueled machine learning algorithms to drive business outcomes. Management continuously talks about realizing synergies from tuck-ins and driving operating leverage, but the fact of the matter is that MA margins have gone nowhere for years and I think it’s fair to say that this side of the company has disappointed expectations. So, stepping back…nearly 80% of MCO’s pro-forma EBITDA comes from a ratings business that has long established itself as the de facto credit risk benchmark, relied upon by all significant players in the fixed income ecosystem. But while MIS is a structurally advantaged business that will continue heaping value over time, because 60% of MIS is high-margin transactional revenue tied to new issuance, it is also unavoidably cyclical, and conditions today seem about as good as they will get. Through the cycle, MIS is a steady high-single digit revenue / low-double-digit EBITDA grower generating prodigious free cash flow (30% of revenue converts to free cash flow). Most of it will be mechanically dedicated to buybacks and dividends, which is probably just as well since its tuck-in acquisitions have had little to show, and I suspect the same will be true of BvD. At $134, the stock trades at 18x/23x my estimate of pro-forma LTM EBITDA/cash EPS. The EBITDA multiple is about as high as it has been in decades (matched only in late 2005/early 2006) on what in retrospect will likely turn out to be cyclically peak earnings. Moody’s is a great business and is priced accordingly, though with a long enough time frame, a buyer will probably do just fine even at the current valuation.
[CPRT – Copart; KAR – KAR Auction Services] Copart is a Beast + Impending IAA Spin-Off from KAR
SAMPLE POSTS,[CPRT] Copart,[KAR] KAR Auction Services |
Beneath the bustle of a used car lot lies a swarm of activity that few of us have ever seen, a thriving ecosystem of wholesalers, rebuilders, parts recyclers, insurers, and intermediators whose trading activities usher some vehicles to dealer lots for resale and others to repair shops or salvage auctions. Roughly 45mn used cars are sold in the US every year. 10mn+ of those are consumer-to-consumer private transactions that take place on eBay, Craigslist, and the like. The other ~30mn+ are captured by a fragmented base of over 40k used car dealerships (CarMax, Penske, Asbury, etc.) strewn across the country, 10mn of which are sourced through wholesale auctions…so, you might trade in your Chevy Bolt for a new Toyota Camry, but that Toyota dealer doesn’t want a Chevy sitting on the lot and so will sell that trade-in at a wholesale auction, through which the vehicle will find its way to a GM dealer. This being an auction market with two-sided network effects, wherein buyers attract sellers and sellers attract buyers, you will not be surprised to learn that value lopsidedly accrues to just two companies – KAR Auction Services, through its ADESA subsidiary, and Cox, through its Manheim subsidiary – who together process ~70%-75% of the market.
This is not the only oligopoly in the auto value chain. The salvage market is also an auction and, in the US, also dominated by two companies, Copart and Insurance Auto Auctions (IAA), who each have roughly ~40% share, with the next largest operator claiming just 3%. But unlike the whole car auction market, which is closely tied to transaction “flow”, salvage auto auctions are supported by the “stock” of outstanding vehicles, providing a stable source of volume as every year some reasonably predictable percentage of the ~280mn cars on US roads get into accidents and sometimes those accidents are serious enough that the cars involved must be pulled off the road entirely. Around ~13mn are removed from the fleet every year(1), of which 4mn are siphoned to salvage auctions that are more likely than not operated by either Copart or IAA. Insurers bring 80%-85% of the cars that run through these auctions, though no single customer accounts for more than 10% of Copart’s revenue. The other side, the buyer side, is more fragmented: sometimes dealers or repair shops buy the whole car and rebuild it, sometimes they dismantle it for parts and, in the case of LKQ, sell those parts to collision repair shops, who buy from LKQ to appease cost conscious insurers directing huge repair volume to shops using less expensive recycled parts for damaged cars.
I’ve heard LKQ described as a business with low entry barriers, but actually, it can be quite difficult to replicate the scale advantages of a disciplined distributor with relatively superior local route density. Tens of thousands of mom-and-pop collision and mechanical repair shops receive unexpected service requests that must be expeditiously handled. Under pressure from insurers, who account for 85% of car repairs and who expect rapid and cost effective service, shops will buy from distributors that can most rapidly deliver the broadest range of SKUs with the shortest lead times at the lowest price. Clustering facilities near customer premises, in a way that lets them be economically stocked with inventory from a regional hub, therefore affords scale advantages.
I may be cherry-picking, but it seems to me that logistics/distribution-type companies with the loftiest returns – Fastenal, MSC, Old Dominion, Copart, Rollins – strongly bias towards organic growth and small tuck-ins relative to peers who undertake splashy “strategic” or “platform” acquisitions. High return stalwarts expand by methodically adjoining adjacent service territories to an existing, heavily utilized logistics base. As I wrote in a prior post:
“FAST is a prime example of how disciplined incremental expansion into adjacent product and geographic space can build to significant competitive advantages over time. In the Company’s case, we see this along three different vectors: 1) geography – from its Midwest origins the company expanded into nearby surrounding territories, building local market density that offered scale advantages in distribution and higher service levels at lower cost to customers; 2) product – FAST leveraged its existing strength in fasteners to expand into non-fastener maintenance and supply; 3) business model – the company is now building on its legacy distribution and store network to offer onsite customer service at a cost that most competitors can’t match.”
Expanding incrementally and leveraging local incumbency is not the path that LKQ has taken. The company has grown its revenue base from less than $1bn in 2006 to over $10bn today largely on the back of acquisitions. It has done 260 of them since its founding in 1998, pushing its presence beyond North America to the UK and continental Europe; beyond recycled auto parts to aftermarket products and accessories distribution for specialty vehicles (RVs and trucks). I don’t know this company intimately well, but its growth strategy feels rather unwieldy, its operational discipline wanting. EBITDA margins in North America, its most mature segment, have expanded by just 80bps over 5 years while European margins have declined. A bull might claim that its profitability profile is distorted by acquisitions, especially in Europe, and that those margins are really expanding on a “look through” basis, but I could just as easily argue that a key source of value extraction in logistics roll-ups comes from layering incremental demand atop existing infrastructure and routes, so margins should be trending persistently higher, roughly concurrent with the tuck-ins themselves. Rollins, which regularly acquires mom-and-pops with internally generated free cash flow; shuts down the acquired branches; and consolidates their customers onto existing routes, has seen its EBITDA margins inflate by over 300bps since 2012 on low/mid-single digit organic growth that is comparable to LKQ’s North America segment.
LKQ boasts #1 share in a variety of international markets, but the company purchased that distinction, and as auto parts distributors and the insurers who dictate repair flow operate locally, what is the synergy that LKQ brings to these far flung territories? In its hypothetical pro forma acquisition scenario slide, management claims it is buying companies at just under 5x year-2 EBITDA (8x trailing)…and yet the company’s consolidated returns on invested capital, at ~10%, are unimpressive and have gone nowhere for 14 years. Nor are the low-teens EBITDA margins in North America, the most mature and highest margin segment, indicative of what is typically delivered by a dominant, best-of-breed distributor with massive scale advantages.
Auto insurers are a pretty consolidated bunch, with the top 10 accounting for 70% of written premiums. They are offsetting claims cost inflation by pressuring repair shops to use recycled parts that sell for half the price but are nonetheless just as reliable as OEM replacement products, a significant demand tailwind for distributors of recycled salvage parts. LKQ’s high single digit national market share is many times larger than the next largest distributor’s – competitive differentiation derived from scale economies is relative; it is usually better to have 5% share of a market whose second largest player accounts for just 1% than to be one of ten players who each have 10% – and so bulls have argued that LKQ is favorably positioned in a static Porter’s 5 Forces framework against the tens of thousands of defenseless, commodified, and fragmented collision and mechanical repair shops that comprise its customer base.
But saying that LKQ has dominant share in North American parts distribution obscures meaningful nuance – i.e. a chain of repair shops may have negligible market share nationally but a dominant presence locally – and says nothing about the change in relative share. Insurance carriers are looking to streamline their claims operations and directing ever more repair flow to a limited number of service providers. This, combined with the growing complexity of auto repairs, which require shops to invest in costly specialized equipment and training, is in turn fueling consolidation among repair shops, whose ranks have dwindled from 65k in 1990 to around 30k today. Since 2000, the proportion of repair volumes handled by multi-shop operations (MSOs) has gone from ~10% to nearly 40%.
Against that context, LKQ’s national network may position it to win a growing share of broader distribution agreements from ever consolidating repair chains, but since the company would no longer be dealing with small mom-and-pops, its relative bargaining position is worse off. Nor does LKQ appear to have meaningful procurement leverage. Even as the largest buyer, LKQ accounts for less than 4% of the total cars sold through an auction industry that is run by just two players. And if you want to argue that LKQ has sourcing advantages outside of salvage, well, where’s the evidence? Gross margins have deteriorated over the last 7 years, from 44% in 2010 to 39% in the last 12 months.
I don’t think LKQ is a bad company; it just doesn’t seem as sweet an opportunity as typically pitched, in my opinion. To the extent that the bull thesis rests on responsible international expansion and a favorable view on the broader trends driving accident rates, Copart and IAA seem like superior bets.
About those trends. When a car gets into an accident, an insurer must determine whether or not the vehicle is worth saving. It does so by comparing the cost of repairing the car versus the amount it could recognize by deeming the vehicle a total loss and selling it at a salvage auction. So, if a car was valued at $10k before the accident and it takes $6k in repairs to restore the car to its “pre-accident value” and another $1k to supply the claimant with a rental while his car gets fixed, then the insurer will choose to send the car to salvage if it thinks it can get more than $3k for it at auction [that’s $10k less $6k less $1k], and repair the car otherwise.
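The repair-vs-total decision above reduces to a simple comparison, sketched below. The function and its names are my own illustration of the logic described in the text, not any insurer's actual model; all figures are the hypothetical ones from the example.

```python
def should_salvage(pre_accident_value, repair_cost, rental_cost, expected_salvage_proceeds):
    """Return True if totaling the car is cheaper for the insurer than repairing it.

    Repairing costs the insurer: repair_cost + rental_cost.
    Totaling costs the insurer:  pre_accident_value - expected_salvage_proceeds.
    """
    cost_to_repair = repair_cost + rental_cost
    cost_to_total = pre_accident_value - expected_salvage_proceeds
    return cost_to_total < cost_to_repair

# The example from the text: $10k pre-accident value, $6k repair, $1k rental.
# Salvage only makes sense if expected auction proceeds exceed $3k.
print(should_salvage(10_000, 6_000, 1_000, 3_500))  # True  -> send to salvage
print(should_salvage(10_000, 6_000, 1_000, 2_500))  # False -> repair the car
```

Note that anything raising repair costs or lowering pre-accident (used car) values pushes this comparison toward salvage, which is the mechanism behind the rising total-loss frequency discussed below.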
The proportion of cars deemed a total loss has been trending higher over the last several years, driving growth in salvage auction volumes:
[In the early/mid ’90s, the total loss % was more like 10%]
For every 5 cars that get into accidents, 1 is sent to a salvage auction, where it might be claimed by a recycler like LKQ, which sells the parts used to repair the other 4.
Every year, cars are being decked out with more and more safety features and technology – airbags, crumple zones, computer systems, cameras, sensors, navigation systems – making them more expensive to repair and more likely to be deemed a total loss when involved in an accident. Moreover, mounting consolidation among repair shops, which attenuates LKQ’s bargaining power relative to its customers, conversely helps Copart and IAA, as these MSOs exert pricing power and exacerbate the rising cost of repairs. Meanwhile, the demand for cheaper recycled parts, which acts as a significant tailwind for LKQ, also helps Copart and IAA, as recyclers participate in salvage auctions to source totaled cars for parts inventory.
Furthermore, the median vehicle age has gone from ~10 years a decade ago to ~12 years today, while the share of really old cars in the car parc, those 11 years and older, has grown from 33% to over 40%. Older cars are more likely to be salvaged because their repair costs make up a relatively high proportion of pre-accident value, all else equal.
Total loss claims will be further buoyed to the extent that the impending deluge of vehicles coming off lease pressures used car prices (“pre-accident” values will be lower).
[At the end of its lease term, typically ~3 years, a car is returned to the captive finance company, who funnels it to either the wholesale or salvage auction market. The growing proportion of leased cars in recent years – in 2010, less than 13% of new car transactions were leases; today, that figure is over 30% – continues to provide some support to salvage volumes, but keep in mind that captive fincos are a subset of the less than 20% of Copart’s auction volume derived from non-insurance companies, so it’s not a huge deal(2)]
So, repair costs and salvage auction participation are both going up while, speculatively, used car values are probably going down. The diverging trends provide a happy tailwind for salvage auction activity.
On top of all that, the number of car accidents has been rising along with the number of cars on the road (and correspondingly, the number of miles driven)…
…and after a 15 year stretch during which the number of crashes per million driven miles declined, the crash rate has ticked higher in recent years, supposedly boosted by texting/emailing while behind the wheel: over 30% of respondents from an AAA Foundation for Traffic Safety survey reported texting while driving in the last 30 days; the National Safety Council claims that 1 in 4 crashes is influenced by cell phone use. But, even if this troubling trend proves to be a fluke that reverses, salvage volumes should still continue to grow as they have in past decades, when persistently falling accident rates were more than offset by rising total loss frequencies (the latter being mainly a function of repair costs relative to used car prices, which, as explained above, are trending higher).
You’ll notice that the constituent drivers of Copart’s revenue growth have naturally offsetting properties that promote growth even when times are bumpy for the auto industry as a whole. For instance, lower average used car prices, in isolation, are ostensibly bad since over half the company’s service revenue [fees charged to vehicle sellers and buyers] is correlated in some way to the selling price of the vehicle…but weaker prices also increase the proportion of repair costs to pre-accident values and make it more likely that a car will be totaled rather than repaired. In FY09 (ending July 2009), Copart’s service revenue shrank by less than 1% y/y, as higher salvage frequency and market share gains substantially counteracted the harrowing fallout of the last recession, including not just weaker used car prices, but the headwinds of dollar strength (around 30% of the cars at Copart’s auctions go to international buyers), commodity weakness, and a shrinking car parc. [Cars that are discontinued or built by manufacturers seeking bankruptcy protection, as occurred during the last recession, see their resale values impaired, which also helps salvage frequency.]
Still, looking past the cycles, we have to contend with the longer-term secular trend towards autonomous transportation. Where will pools of value accrue in what will surely be a dramatically transformed ecosystem of OEMs, mechanical parts suppliers, electronic component vendors, repair shops, dealers, etc., and what are the knock-on effects to parking lots, gas stations, urban planning, emergency room visits, and whatnot? I certainly don’t have the answers, but at least on the first question my best guess is that the hardware components enabling autonomy – sensors, LIDAR, cameras – will eventually be commodified and value will flow to those owning the mapping and autonomous driving software required to optimize routes and maneuver through surrounding environments. I’m less sure that excess profits accrue to those purely matching riders to cars because, notwithstanding the cross-side network effects between drivers and riders, the question of customer ownership is rendered fluid by the fact that the drivers (or, eventually, autonomous cars) are not captive and rider switching costs are low.
Cleanly delineating functions as I have done is itself fraught with error. Mapping and autonomous software benefit from machine learning empowered data network effects – more traversed miles translate into more granular maps and richer driving scenarios, which make every vehicle in a service fleet smarter and better utilized – so, a dominant ride-sharing service or autonomous car manufacturer today might bootstrap its way to a competitive advantage founded on maps and software. I don’t know. But at the 20 to 40 year evolutionary endpoint, where fully autonomous vehicles can be readily beckoned in most major metros, there will surely be far fewer accidents than there are today (driver error accounts for 90% of all accidents). They may get so good, in fact, that a lot of the expensive safety features found in cars today are obviated. Both developments carry existential undertones for salvage auctions, whose volumes are influenced by both the frequency of accidents and the severity of damage (as measured by repair costs per damaged vehicle).
Copart’s management put out a presentation several years ago that in “skate to where the puck is” fashion, cited some surveys and expert opinions purporting that consumers are resistant to self-driving cars and that the skills of autonomous vehicles do not rival those of human drivers. In the same presentation, management pointed out that there are ~260mn cars on the road and, assuming a ~17mn SAAR and an average car life of 15 years, it takes a really long time for new technology to meaningfully penetrate the car parc. But autonomous vehicles will make a significant impact on accident frequency well before they take over the car parc as every fully functional autonomous vehicle lowers the odds of collision with several other vehicles. How long it will take before we get to a point where this starts to matter is, of course, an open question. With respect to salvage auction volumes, we may be in a sweet spot today where technology is making cars more complicated and expensive to repair and safer for drivers who do find themselves in an accident, but is not yet good enough to materially reduce the frequency of accidents that is at least partially attributable to the modern day scourge of smartphone addiction…and, I can see this state of affairs persisting on the 5-10 year time frame that is relevant to me (and many of you).
As long as that’s the case, salvage volumes will likely continue plodding along at a high-single digit rate of growth – fueled by a low-single digit growth in claims, a growing proportion of which result in total losses due to rising repair costs – and value will continue migrating to the two largest players, Copart and IAA, who together share 80% of the US market. Both companies have built up scale advantages over 35 years that would be very difficult to replicate.
[IAA got into the salvage vehicle business in 1982, the same year as Copart. It went public in 1991, grew through a series of acquisitions, was acquired by private equity in 2005, and together with ADESA was merged into KAR auction services in 2007. IAA is now in the process of being spun-off of KAR.]
An insurer tries to minimize its claims costs by either minimizing repair costs on the cars it saves or by maximizing the salvage proceeds for those it deems a total loss. The latter is best achieved by creating competition among a large base of potential buyers, who in turn seek a reliable source of supply across a broad range of models…buyers attract sellers and sellers beget buyers in the feedback loop characteristic of marketplaces. Way back in the day, Copart’s salvage auctions were conducted locally, so if you wanted to bid, you’d have to drive to the physical auction site and position yourself among a throng of other buyers before a gavel-dropping, megaphoned auctioneer (“going once, going twice…”). This meant that buyers and sellers were sourced from a tight radius (~150 miles) and that the cross-side network effects were geographically confined. But the boundaries of the auction have expanded, thanks in large part to internet enabled bidding.
[Beginning in fy04 (ending July), Copart embarked on a journey to conduct all its auctions online. These virtual auctions are split into two phases. In phase 1, which starts 3 days before the “real”, live auction, buyers submit the maximum price they are willing to pay for a vehicle and Copart’s system incrementally bids up to that price on behalf of the buyer, who receives an email if he is outbid. The winning bid in phase 1 sets the starting price in phase 2, when bidders compete in a real time virtual auction environment.]
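The phase-1 proxy mechanic described in the brackets above can be sketched roughly as follows. This is my own illustration of generic proxy bidding, not Copart's actual system; the bid increment, tie handling, and function names are all assumptions.

```python
def proxy_auction(max_bids, start=0, increment=25):
    """Generic proxy auction: each buyer submits a maximum price and the
    system bids on their behalf. Returns (winner, price), where price is
    one increment above the second-highest maximum, capped at the
    winner's own maximum."""
    ranked = sorted(max_bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else start
    price = min(top, runner_up + increment)
    return winner, price

# buyer_a's proxy only has to top buyer_b's $3,200 maximum, so the
# phase-1 winning bid lands at $3,225, not buyer_a's full $4,000.
print(proxy_auction({"buyer_a": 4_000, "buyer_b": 3_200, "buyer_c": 2_500}))
```

In Copart's format, that phase-1 winning bid would then seed the starting price of the live phase-2 auction, where bidders compete in real time.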
Online auctions have made it far more convenient for buyers from faraway locations to participate, regardless of weather conditions, resulting in more competing offers from buyers, higher realized prices for sellers, and indirectly, more revenue for Copart, whose transaction fees are partly tied to vehicle selling prices. Today, nearly half of Copart’s auctioned vehicle volumes are sold to buyers residing outside the state where the vehicle is being auctioned [30% to out-of-state buyers inside the US; 20% to international buyers]. To be fair, KAR, which runs both physical and online auctions, disputes the notion that running purely digital auctions delivers higher selling prices, contending that online auctions cannot replicate the action and excitement of being there live, in person. Yet, half of IAA’s vehicles are sold to internet buyers. While this is well below the over 80% of salvaged vehicles that receive an internet bid – implying that the topping bid more often than not comes from someone on the ground – the realized selling price would likely have been lower without incremental demand from the online channel. And virtual bidding brings other benefits besides: it takes out the operating and capital expenses required to run live auctions (though these are somewhat offset by the incremental IT expenses required to run auctions online), allows buyers to bid in multiple auctions simultaneously, and speeds up auction sales since, as described in the brackets above, some of the price discovery occurs in the preliminary phase leading up to live action.
While bidding may be just as easily done online as in person, having a physical presence still matters a lot because of transport costs. Copart relies on third party haulers to move damaged vehicles from the towing company to one of its ~200 storage facilities (or “yards”) and to then take the car from the yard to the buyer. Too few sites will result in obstructive crowding at existing locations and higher transport costs, which get passed down as higher transaction costs to buyers and sellers, making the venue operator less competitive relative to someone with more locations. A dense yard network helps in cost effectively sourcing damaged vehicles…and the more vehicles you have running through your auction, the more bidders you will attract, and the better returns you will earn on yard investments(3). It also allows Copart to strike regional and national supply agreements with insurance companies.
These two instantiations of scale – the lower unit costs of operating a network of physical yards and the network effects of an auction – have reinforced one another for over 35 years, producing a system that supports 1mn vehicle searches/day and attracts 44mn bids/year to find homes (or graveyards, as the case may be) for over 1mn damaged vehicles. [Besides the fees that it takes from both the buyer and seller in each transaction, Copart also offers various ancillary services. One of these services is a salvage estimation tool called ProQuote that leverages the millions of data points it has gathered through its auctions – including make, model, year, severity of damage, season, scrap metal prices, and proximity to ports – to arrive at ever more accurate predictions of how much a damaged vehicle would get in an auction, allowing insurance companies to make wiser repair vs. salvage choices…so, data network effects are a third kind of relevant scale, albeit of much smaller importance].
Management has moreover complemented these scale advantages with operating discipline and thoughtful capital allocation. Rather than pulling an LKQ – levering its balance sheet and spraying capital at far flung markets – Copart has taken a more measured approach, investing around a quarter of its free cash flow into acquisitions over the last decade to infill existing markets and gain toeholds internationally, bolstering those new geographies with tuck-ins and organic development. Because Copart’s online auction format produces a globally distributed buyer base, when the company lands in a new market, it brings with it 750k registered members from around the globe, such that, as management put it, “when we open up in a market like the UK, we start selling motorcycles in the US.” This is different from a distributor like LKQ, whose scale economies are predominantly local.
Copart’s International EBITDA margins are still below those in the US (30% vs. 39%), but the key point is that margins have trekked steadily higher, from just 23% in FY14 (when management first broke International out as a separate segment) and will continue doing so as the company densifies these markets and scale economies kick in. Eventually, I don’t see why margins in Copart’s international markets – UK, UAE, Bahrain, Oman, Brazil, Spain, India, Ireland, and most promisingly, Germany – can’t rival those of the US. You could even make the case that the relative value proposition that Copart offers is stronger in continental Europe. The European market differs from the US in that insurers there reimburse policyholders only for the diminished value of the damaged car. It’s up to the policyholder to sell the total loss vehicle in order to recover the rest. With Copart’s model, the policyholder is reimbursed for the insured amount and can replace his damaged vehicle right away; the insurer takes on the responsibility of placing the car in auction, but, if recent auction results are any guide, realizes a higher return relative to existing remarketing methods; and the buyer, who often had to wait up to 3 weeks to know if his bid was accepted, now gets a steady, reliable flow of inventory. Everyone wins. It is then perhaps no wonder that the number of participants and unique bidders per auction are higher in Germany now than they are in North America, despite the former market’s relative immaturity.
And then there are the buybacks, which are lumpy and sporadic, as they should be. Included in Copart’s share repurchase history are two big Dutch tender offers – one in 2011, in which it retired ~14% of its share count and another in 2015, where it took out another ~11% – executed at prices that were a fraction of where they are today (~$10 in 2011 and ~$19 in 2015, split-adjusted vs. the current share price of $56), at earnings multiples significantly below current levels [management did not tender any shares in either of these auctions]. That the company hasn’t repurchased any of its shares in fy17 or year-to-date should perhaps tell you something about how appealing Copart stock is at 34x trailing earnings 🙂
A formidable scale-based competitive advantage supported by a responsible expansion plan has produced enviable returns on capital: EBITDA (no add-backs, no bullshit) has increased by $255mn over the last 7 years through fy17 on incremental gross capital of $845mn; pre-tax returns on gross capital have averaged around ~24% since fy11 with modest variation.
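As a sanity check on the incremental-return claim above (the EBITDA and capital figures are from the text; the arithmetic itself is mine):

```python
# Incremental pre-tax return on gross capital, fy10 -> fy17.
incremental_ebitda = 255          # $mn increase in EBITDA over 7 years
incremental_gross_capital = 845   # $mn of incremental gross capital deployed

incremental_return = incremental_ebitda / incremental_gross_capital
print(f"Incremental pre-tax return on gross capital: {incremental_return:.0%}")
# roughly 30% -- above the ~24% average return on total gross capital
# since fy11, i.e. new capital is earning at least as well as the old base
```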
Finally, while stressing the importance of values and culture is eye-rollingly platitudinous, I can’t help but look back to 2005 and applaud CEO Jay Adair’s response (Jay was President of Copart at the time) to a question posed by a model-tweaking analyst who could think of nothing better than to wonder whether Hurricane Katrina, a disaster that killed nearly 2,000 people and displaced 400,000, offered an opportunity to implement price hikes (4):
“I’m not going to have that discussion. It’s not Copart’s style. I wouldn’t do it. In a situation like this where an area has been so negatively impacted…So in this kind of situation, these major carriers are doing everything in the world they can to service their client base. I have to do everything I can to service them. I just don’t think it’s appropriate to even think of hitting them with trying to make a profit on this. I could not hit them with a price increase, I couldn’t do it.”
Does this answer give me the fuzzies? Naturally. I admit, I’m a sucker for this kind of stuff. But treating your customer well in hard times also happens to be sound long-term business (obviously).
Before departing, I thought it might be helpful to put Copart side-by-side with IAA, as the gaping valuation disparity between Copart and KAR Auction Services – 21x EV/EBITDA for Copart vs. 11x (EV+capitalized rent)/EBITDAR for KAR, which, unlike Copart, predominantly leases rather than owns its facilities – seems interesting given IAA’s impending spin.
Here is a high level view of Copart’s financials vs. IAA:
There are several important caveats. First, Copart and KAR have different fiscal year ends and I was too lazy to align their numbers. Doesn’t matter. The point is that Copart generates higher pre-tax returns than IAA on a similar revenue base [you can go back to FY16 when Copart generated the same amount of revenue as KAR does today and see that Copart’s returns were still higher]. Second, as I previously alluded to, IAA mostly leases its facilities while Copart mostly owns. KAR’s management claims that its rent expense comes to around 12% of revenue, which would account for all the margin differential, but can this be right? KAR’s total lease expense in 2017 was ~$164mn, which is indeed around 12%/13% of IAA’s revenue, but of course KAR is much bigger than IAA and includes a whole car auction segment, ADESA, which accounts for ~1/2 of KAR’s revenue and segment EBITDA. Surely some of KAR’s lease expense should be allocated to ADESA (or AFC, the company’s floorplan financing segment). But, let’s just say that rent is around 6% of IAA’s revenue. Even if we add those 6 points to IAA’s margins and generously make no reconciling adjustment to assets, IAA’s pre-tax ROA still falls short of Copart’s. Moreover, IAA’s margins don’t include any allocation of KAR’s corporate overhead, which is meaningful at around 15% of total segment-level EBITDA. Also, in the above table, I included all of KAR’s EBITDA add-backs except for stock comp and did not do likewise for Copart (in fact, refreshingly, Copart’s management doesn’t even disclose an Adjusted EBITDA figure). Finally, compared to IAA, Copart has a much higher mix of international business, which (for now) generates only ~15% ROA vs. 30%+ in the US, so comparing IAA to Copart USA only exacerbates the returns disparity.
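The rent-allocation check above can be made explicit. The lease expense and the 12-13% figure come from the text; the 50/50 split between ADESA and IAA is my simplifying assumption, and the implied revenue figure is back-solved from management's own ratio.

```python
kar_total_lease = 164.0     # $mn, KAR's total 2017 lease expense (text)
iaa_revenue = 164 / 0.125   # ~$1.3bn, implied if rent were ~12.5% of IAA revenue
iaa_lease_share = 0.5       # assumption: half of KAR's rent belongs to IAA

iaa_rent_pct = kar_total_lease * iaa_lease_share / iaa_revenue
print(f"IAA rent as % of revenue: {iaa_rent_pct:.1%}")
# roughly 6%, half of management's ~12% figure -- so even adding those
# 6 points back to IAA's margins leaves its returns below Copart's
```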
So, there does appear to be some kind of structural advantage or operational discipline in play at Copart that is lacking at IAA. I thought maybe IAA’s cost structure included temporary cost bloat from acquired companies, but that doesn’t appear to be the case, as nearly all recent acquisitions and most of KAR’s goodwill is related to ADESA. We’ll get more clarity on what IAA looks like as an independent entity when the Form 10 comes out, but for now when I allocate corporate overhead according to revenue, I get around $300mn in EBITDA. This doesn’t deserve to trade at Copart’s 23x multiple (heck, that multiple isn’t exactly cheap for CPRT), but nor does KAR’s 10x multiple seem entirely fair.
(1) vs. 17mn new cars that enter the fleet every year.
(2) Another subset includes cars from municipalities and charities. These cars tend to be really shitty, low value ones that nonetheless take as much time and land to sell as a higher value insurance car, so management has been pulling back on this supply channel.
(3) As you might expect, a new yard may to some degree cannibalize an existing one (and present a headwind to same store sales growth), but the trade off is well worth it if you can fill up both yards and do so with lower unit towing costs. For this reason, growth in same store yard sales, which management stopped disclosing several years ago, can be a misleading measure of financial health.
(4) In natural disasters, Copart bears upfront the outsized costs of transporting a huge number of damaged cars, many of them high value, to its facilities, which temporarily depresses margins until those cars are liquidated in auction months later, whereupon margins get a lift. Net-net, hurricanes help Copart’s profits.
Business update (2020)
Business updates,SAMPLE POSTS |
I recently did an interview with my friend @LibertyRPF. It will serve as a substitute for my 2020 year-end review since it contains everything I wanted to say (and more).
With permission from @LibertyRPF, I have reproduced the interview below. You can access the original here.
Interview with David Kim a.k.a. Scuttleblurb
𝕊𝕡𝕖𝕔𝕚𝕒𝕝 𝔼𝕕𝕚𝕥𝕚𝕠𝕟 #𝟙
Scuttleblurb is one of my favorite sources of investing/business information, and its creator, David Kim, is one of my favorite people I’ve met online (we haven’t met in person yet, but I hope to fix that someday… hurry up, Pfizer!).
As a writer, he’s who I want to be when I grow up.
He dives deep into companies and industries, but not in the typical way of many financial writers: He’s not trying to pitch you, he’s not starting with a conclusion in mind that he’s trying to justify by cherry picking info. He immerses himself in a business for a while and then reports what he finds about industry dynamics, management quality, unit economics, competitive advantages, how historical developments have made things the way they are today, etc.
He’s more ‘Magellan writing about his voyages’ than ‘salesman trying to get you to buy that shiny Dyson’…
His pieces often conclude on some shade of gray, without a clear call to action or price target, but I like that. It’s how the real m’f’kin’ world works.
Enough from me, let’s go to David (I’m in bold):
Hi David, thanks for doing this, I really appreciate it! I know you must have your hands full between the piles of transcripts and 10Ks and the new twins.
Thanks back. I very much enjoy your eclectic newsletter. It’s one of the first things I read in the morning. I also appreciate that we’re doing this interview in writing, which I much prefer to speaking.
I don’t want to assume that everybody reading this already knows you and reads your stuff, so could you start by telling the reader who you are, what’s your background and how you got to where you are now, and what’s your day job these days?
These days I write the scuttleblurb blog and manage a small fund. Prior to doing either, I was a research analyst at a L/S equity hedge fund.
The reason I started scuttleblurb is that I had just launched a fund with no prospective investors and my wife Maria and I needed a way to pay the bills. I write research notes as part of my investment process, and I put those notes online hoping people would pay to read them. Very few did.
So it was mid-2017 and we were stuck. scuttleblurb was going nowhere. My big idea for getting people to discover and then pay for my work was sending personalized emails and handwritten letters (really) to fund managers and analysts, with coupon codes and free trials and such. I spent a few thousand bucks advertising on marketfolly, which worked all right, but otherwise scuttleblurb got no traction. Maria and I just had our first baby and we were scraping by on her teaching salary plus income from a condo I was renting out as we weighed my non-existent career options. Then you somehow discovered my blog [Finding stuff is what I do! -Lib] and tweeted a link to one of my posts. Then so did @Bluegrasscap, @Intrinsicinv, and a few others.
I didn’t use Twitter at the time. I had a zombie account with like 40 followers that I barely touched. So you can imagine my surprise when the blog started to gain followers on this geeky community I had never heard of called FinTwit. I kept writing, people kept tweeting my posts, and here we are. I estimate that more than 70% of my subscribers have come from Twitter word-of-mouth. I’m so grateful for FinTwit. This community lifted me on its shoulders when things looked utterly hopeless. I honestly get a little choked up thinking about it.
What fascinates me most is your research process. You do such a good job, I’m curious what techniques I may be able to learn from you, though I suspect that there are no real tricks, just lots and lots of reading, asking questions and then tracking down answers, figuring out what is most important to focus on, etc… Can you describe what the research process is like for one of your long posts or series of posts?
Like every analyst, I read SEC filings and transcripts, Google, watch relevant YouTube videos, talk to management (usually IR), and do the same for competitors and anyone else of note in the ecosystem.
I write up every company that sparks my interest, whether I find the company an attractive investment opportunity at the time or not. I recall you mentioning that you research companies that speak to you regardless of how pricey they are and then keep those companies on a watchlist so that you’re ready to pounce when valuation approaches something reasonable. I’m the same way.
Speaking from personal experience, most investment funds don’t let their analysts do deep dives on interesting companies trading at uninteresting valuations. Try telling your PM that you’re going to spend a month studying such-and-such industry because you find it interesting but there’s probably nothing to buy at the moment and there may never be. See how that conversation goes. It’s nice to be able to set your own research agenda.
But in the time it takes for a sane valuation to arrive you may find yourself forgetting what it is you found so interesting about the stock in the first place, so it’s useful to have a write-up to refer back to. For the sake of efficiency, many analysts jot down bullet points and preserve a folder of notes, but there’s something about long-form prose. I can’t explain the underlying mechanics of how it happens, but oftentimes the very act of writing sparks an idea and exploring that idea in more detail leads me to an important point that I don’t think I would’ve grasped otherwise. This was true with Align Technology [Link for SB members. -Lib], which on the surface looked like a static collection of expiring IP selling a commodity product whose economics were just waiting to be siphoned away by low-end clear aligner competition. Writing led me down the track of thinking about Align as a system of reinforcing pieces – data from case starts enabling more complex malocclusion cases; vertically integrated software and hardware crowding out competing solutions and saving orthodontists chair time; a strong brand synonymous with clear aligner treatment – that in concert fueled more case starts, data, manufacturing process improvements, orthodontist adoption, etc. (there’s an argument to be made that Smile Direct Club is re-setting the basis of competition and presents the first major competitive threat to Align in over a decade, which I’ll likely touch on in a future post).
I don’t mean to sound prescriptive. I think many people find writing to be mentally taxing and maybe the time spent writing would be more efficiently put to use elsewhere. I just happen to enjoy writing, it’s like therapy for me, but I don’t want to promote an approach that may not make sense for most others.
I’m the same. Writing is very hard even if I enjoy it, but it’s also very fruitful because it’s hard. Writing is hard because thinking is hard, and on the page the gaps in logic and missing pieces of the puzzle can’t be hand-waved away as easily, while with what I’d do otherwise — the path of least resistance — my brain would probably skip over a lot more holes and leave some interesting doors unopened.
As far as what I focus on, I spend a lot of time trying to understand a company’s advantages or the advantages it might build up to. Earning a decent return on the growthy compounders I often write about at today’s valuations means you have to be right about the secular trend, the state of competition, and the ability to execute/adapt. Those are basically the 3 core elements. The secular trend is often the safe bet because it’s readily apparent to all that more compute is moving off-prem, that connected TV is taking share, that telehealth adoption is infiltrating healthcare, that semiconductor consumption is accelerating, and so forth, whereas the state of competition over the next 5-10 years can seem much more distant and abstract. The boundaries of competition can be fluid, especially in tech, and justifying current valuations often means assuming the company successfully trespasses into adjacent, already occupied territory. Maybe for companies with low enough market caps, you can lean on the “there’s plenty of TAM to go around and I just need to be directionally right” defense, but compounding at 20% over the next 7 years on, say, Snowflake, a $75bn company trading at 150x revenue (or wherever it’s at now), means being right on the specific state of the data management ecosystem and Snowflake’s ability to capture value within it.
The other thing I try to do is provide context, nuance, and caveats around different explanations in order to avoid overfitting concepts and mental models. LendingTree [Direct link for SB subs. -Lib] might look like a classic marketplace that scales through cross-side network effects between lenders and borrowers, but I think it’s more aptly described as a sophisticated marketing coop that arbitrages online ad inventory. Live Nation [Direct links for SB subs, #1 and #2. -Lib] is often described as a flywheel, but it looks more like a bundle that loss leads through a promotions biz. Sometimes network effects are relevant but they’re not really the moat they seem to be. Equity exchanges enjoy network effects in that buyers attract sellers and vice-versa, and the growing concentration of liquidity narrows bid/ask spreads, in turn drawing more buyers and sellers. But in the mid-2000s, as legacy equity exchanges lost regulatory protection, non-exchange trading platforms stole enormous share through superior technology and the backing of powerful brokers. Yelp has network effects in theory, but Google guards the front door and absorbs most of the value in local search. Symantec, Cisco, Palo Alto Networks [Direct link for SB subs. -Lib], and other cybersecurity vendors monitor trillions of telemetry points, but the resulting data network effects seem to be quickly arbitraged away, and that’s something to consider when evaluating the explanatory weight of network effects for the post-2010 breed of cloud native, zero-trust vendors.
By overapplying cherished concepts, you risk minimizing the importance of other factors or confirmation biasing your starting theory. An analyst begins to treat the diligence process the same way an attorney might, selectively gathering evidence to defend a starting hypothesis rather than considering the case from multiple angles to surface the best explanation. Maybe mismatches between changes in balance sheet items and the cash flow statement confirm your starting thesis that Company X is a fraud, and you so badly want this to be true that you don’t give due consideration to another plausible explanation: that under GAAP balance sheet items are translated from local currency to USD at period-end exchange rates while the cash flow statement is translated using monthly averages, which can create large discrepancies between the two during periods of volatile FX movements, discrepancies that are accounted for in shareholders’ equity.
And sometimes you dismiss the existence of moats in places where you don’t expect to find them. In Surfaces and Essences, Douglas Hofstadter talks about how a given item can belong in many unrelated categories depending on context. The image of a basketball rolling, for instance, is more readily accessible than that of a basketball floating because most of our observations and experiences related to basketballs have been on the ground. We think of airlines as intrinsically terrible businesses, loaded with high fixed costs, selling a commodity product, prone to price wars and periodic bouts of bankruptcy. But the same set of conditions may create advantages for disciplined airlines like Ryanair [Direct link for SB subs. -Lib] and Wizz Air [Direct link. -Lib], who, at the expense of bloated competitors, leverage superior unit costs to maintain low ticket prices, generating more passenger volumes on a fixed cost base, fueling better landing fees with airports and volume discounts on aircraft, thereby reinforcing the cost advantage. This starts to look like a flywheel, a designation that we typically reserve for consumer internet and tech companies.
The set of concepts and mental models that you have at your disposal will evolve over time and with experience you get better at not only drawing on the right ones but doing so in a way that is unfettered by preconceptions, proportionate to the case at hand, and peppered with the appropriate caveats.
I can’t tell you how many bearish SaaS pitches I read 5-10 years ago that were predicated on accounting-based red flags and quarterly billings weakness, factors that were just absolutely swamped by massive secular tailwinds (this has pissed off some really smart low-multiple investors because the only thing worse than being wrong is seeing someone who you perceive as less sophisticated and experienced than you being right at your expense). The Bezos flywheel napkin sketch helped me understand the reflexive properties of scale economies, but then I found myself applying the sketch to areas where it probably didn’t belong, and in recent years I’ve chilled out a bit and drawn on the concept in a more nuanced way.
Proportionality is important to sound analysis. You don’t want to start seeing flywheels and fraud everywhere you look. You don’t want to diminish the tsunami of ad spend moving to connected TV because (gasp) Trade Desk’s [Direct link. -Lib] receivables outpaced revenue this one quarter; nor do you want your enthusiasm over connected TV to blind you to the possibility that surplus in that domain may eventually concentrate inside walled gardens. You don’t want your commitment to low multiples to steer you away from companies with compelling unit economics that are losing money as they invest in growth; nor do you want to stretch one assumption after the next to validate a purchase of a company you love (assuming, of course, that you’re optimizing for risk and reward when investing, which you may not be and that’s totally fine). By the way, congrats on your investment in The Trade Desk. I sold 40% ago, oops. Why didn’t you tell me the stock would go from $500 to $800 in like a month….thought we were friends, geez. [Well, I sold ALGN a while ago and didn’t get the rebound on that one, so you win some and lose some ¯\_(ツ)_/¯ -Lib]
If it’s not an industry you already know, how do you first attack it and is it ever overwhelming? Have you ever given up entirely on something you wanted to write about because it’s just too complex?
It’s always overwhelming. My imposter syndrome is usually up to here because I cover such a wide range of companies and most of the industries I want to write about reside well outside my existing circle of competence at the time I want to write about them.
To be clear, I don’t mind being the dumbest guy in a room of specialists. I have no interest in being the sapient thought leader [I had to look up ‘sapient’, good vocabulary! -Lib] with big sweeping ideas of what software or media looks like in 10 years. I see myself more as the plucky interloper at a house party, probing one room, then the next, trying to make sense of what’s going on in each. But the feeling that I have no business charging people to read something I’ve written given my lack of background knowledge and expertise doesn’t ever go away. I just do my best to push through it and trust that approaching the work with intellectual honesty and an open mind will compensate for my considerable knowledge deficits and yield something that people want to read.
What’s your Big Picture vision for Scuttleblurb? What are you trying to give the reader, and who is your ideal reader? What kind of mindset and expectations should someone have going in?
The ideal reader is someone who derives intrinsic enjoyment from analyzing businesses, regardless of sector, even without the carrot of an actionable investment idea. If you’re looking for stock tips, please don’t subscribe [Subscribe anyway, and then get over that urge. You’ll be better off for it. -Lib]. You will feel ripped off and I will feel bad. I try to be very clear about this up front, in the “About Me” page and in the “Subscription” page, though I think some folks may still be frustrated or confused by the absence of price targets and buy/sell recommendations. My posts will often include a back-of-the-envelope valuation section, like “under such-and-such assumptions, here are the returns you can expect”, but the point of that is to offer a sense of what you need to believe to earn 10%, 15%, or whatever. You can make up whatever numbers you want. What I’m hoping to do is provide as honest a qualitative context as possible to inform your assumptions.
Some time back, a subscriber commented that I should focus on names in which I have a high degree of conviction. I think following this recommendation would yield a barren website because there are so few stocks that meet those criteria for me. Who knows, it might even tempt me to dishonestly convey conviction where it doesn’t exist for the sake of placating readers. But more importantly, it would be out of character. I don’t really have the emotional makeup to be a “high conviction, bet-the-farm” investor and I don’t really fall in love with the companies I analyze (which can be both an asset and a liability). I could act otherwise, but over time you’d see through the facade.
I think that we tend to remember the results we see and forget the mistakes we avoid. Convincing your PM not to buy a stock that then goes to zero will not earn you the same glory (or bonus) as convincing your PM to buy a stock that doubles. The PM doesn’t have the same felt sense of profit and loss on the stock he avoids that goes to zero as he does on the stock he owns that doubles. Similarly, a scuttleblurb writeup that prompts a position size reduction will not get the same credit as a newsletter “high conviction list” that prompts a new purchase, even if the losses prevented in the former and the gains realized in the latter are the same. The fact of the matter is that most people want black-and-white buy/sell recommendations. They want conviction. I offer neither. This can be frustrating.
There’s value to what I offer but it’s hard to quantify. I guess I would say don’t purchase a scuttleblurb subscription hoping to make money on actionable stock ideas. Do so because you want to get a little smarter about certain businesses than you are today and trust that over the course of your career as an investor, being a sharper business analyst will pay dividends.
Do you have a long-term vision for the site that is different from what it is today?
Almost a year ago, you wrote about how Scuttleblurb was doing as a business. I don’t know if you’re planning to do the same thing this year, and I don’t want to steal your thunder, but I’m curious to know how things have been going? Has the pandemic hurt or helped your business? Are you finding that the more you grow, the more growth rate accelerates because you’re getting more name recognition in the space and you have more subs spreading the word to potential subs?
I think I’m just going to link to this interview for my year-end review. It has everything I want to say. I don’t know if the pandemic has hurt or helped the blog. Things have been going pretty well. Scuttleblurb passed 1,000 paying subs this year and it looks like my gross bookings in 2020 will be about double what they were a year ago. Bookings have gone from $19k in 2017 to $49k in 2018 to $114k in 2019 to what’s looking like maybe $230k-$235k this year (vs. $223k LTM through Nov 8), so while the growth rate has decelerated, it’s nonetheless been so much stronger than my expectations. [Very happy for you, you deserve it! -Lib]
But while growth has been strong, my annual churn rate has spiked from ~11% in 2019 to what’s looking like ~high-teens this year. Part of this, I think, is just that as my audience has grown, incremental subscribers are not going to be as passionate about my work as the early adopters.
Another contributing factor is competition. There has been an explosion of Substack newsletters launched this year, many of which focus on tech and media analysis/strategy. It’s like all the cool kids are giving hot takes on Joe Rogan going to Spotify and Nvidia buying ARM or providing commentary on Stripe, Ant Financial, Snowflake, Shopify….and then here’s me off in the corner writing about a 100+ year-old company that sells bacteria to dairy processors [Direct link to Chr. Hansen & Novozymes post. -Lib]. Very off-trend. I write about SaaS, consumer internet, and other trendy stocks, sure, but not reliably so.
But the market for tech/media business analysis, while hot, is also crowded. When Stripe files to go public, your inbox will no doubt be flooded with Substack commentary. You do not need another newsletter breaking down that S-1. This is not an area where I can differentiate, nor do I care to. And there are so many talented writers with fast minds and fingers who can do the daily run much better than I ever will. I am a slow, plodding thinker. It really takes me a while to come up with something that I think is worth publishing. Sometimes I will sit on a write-up for over a year because it’s just not worth reading. I suck at extemporaneous thought and can’t think of clever things to say on the fly. I wish it weren’t like this.
Anyways, everyone’s now hip to the notion that customer obsession is the way to value creation, so what a real business publication would do upon seeing churn spike as mine has is find a way to write about what interests their readers most. But the truth, and this will hardly endear me to your audience, is that I spend no time thinking about what my subscribers want to read. I have always treated scuttleblurb as a personal research journal rather than a business, a way to pursue my own personal interests and get paid in the process. I’m quite sure that forcing myself to write about popular topics merely to attract subscribers would be the beginning of scuttleblurb’s end. And so my analysis and writing have got to be strong enough to compensate for incomplete overlap between my interests and those of my readers.
Sometimes the overlap is just too minimal. No amount of quality analysis on the consumer credit bureaus is going to satisfy subscribers who just want to read about enterprise SaaS. And someone looking for stock tips is not going to be satisfied with case studies. If subscribers churn off for reasons like those, so be it. I’m cool with that. If they’re cancelling because they’re looking for well-written, quality analysis and aren’t finding it on scuttleblurb, that’s a problem. I hope most of my churn is coming from the former, but I can’t really be sure.
Thanks so much for doing this, there’s so much gold and wisdom in there, it’s a masterclass masquerading as an interview. Take care my friend!