
[EQIX – Equinix; INXN – Interxion] Network Effects in a Box

The “internet”, as the name implies, is a network of networks.  Scuttleblurb.com is sitting on a server somewhere connected to an IP network different from the one your device is connected to, and the fact that you are reading this means those two networks are communicating.  Likewise, if your internet service provider is Charter and you’d like to send an email to a friend whose ISP is Comcast, the networks of Charter and Comcast need a way to trade data traffic (“peer” with one another).  In the nascent days of the internet, different networks did this at Network Access Points established and operated by non-profits and the government.  Over time, large telecom carriers, who owned the core networks, took control of coordinating peering activity, and small carriers that wished to exchange traffic with them were forced to house switching equipment on their premises.

Eventually, most peering agreements moved to “carrier-neutral” Internet Exchange Points (“IXs” or “IXPs”, data centers like those owned and/or operated by Equinix and Interxion) that were independent of any single carrier.  Today, global carrier neutral colocation/interconnection revenue of $15bn exceeds that of bandwidth provider colo by a factor of two as major telcos have seen their exchange businesses wither.  At first, service providers landed hooks at these neutral exchange points…and then came the content providers, financial institutions, and enterprises, in that order.  Customers at these neutral exchange points can connect to a single port and access hundreds of carriers and ISPs within a single data center or cluster of data centers.  Alternatively, a B2B enterprise that wants to sync to its partners and customers without enduring the congestion of public peering [on a “shared fabric” or “peering fabric”, where multiple parties interconnect their networks at a single port] can establish private “cross-connects”, or cables that directly tether its equipment to that of its customers within the same DC [in an intracampus cross connect, the DC operator connects multiple datacenters with fiber optic cables, giving customers access to customers located in other DC buildings].  

[To get a sense of how consequential network peering is to experiencing the web as we know it today, here’s an account of a de-peering incident, as told by Andrew Blum in his book Tubes: A Journey to the Center of the Internet:

“In one famous de-peering episode in 2008, Sprint stopped peering with Cogent for three days.  As a result, 3.3% of global Internet addresses ‘partitioned’, meaning they were cut off from the rest of the Internet…Any network that was ‘single-homed’ behind Sprint or Cogent – meaning they relied on the network exclusively to get to the rest of the Internet – was unable to reach any network that was ‘single-homed’ behind the other.  Among the better-known ‘captives’ behind Sprint were the US Department of Justice, the Commonwealth of Massachusetts, and Northrop Grumman; behind Cogent were NASA, ING Canada, and the New York court system.  Emails between the two camps couldn’t be delivered.  Their websites appeared to be unavailable, the connection unable to be established.”]

The benefits of colocation seem obvious enough.  Even the $2mn+ of capital you’re laying out upfront for a small 10k sf private build [1] is a pittance compared to the recurring expenses – taxes, staff, maintenance, and especially power – of operating it.  You won’t get the latency benefits of cross-connecting with customers, you’ll pay costly networking tolls to local transit providers, and you’re probably not even going to be using all that built capacity most of the time anyhow.

There’s this theory in urban economics called “economies of agglomeration”, which posits that firms in related industries achieve scale economies by clustering together in a confined region, as the concentration of related companies attracts deep, specialized pools of labor and suppliers that can be accessed more cost effectively when they are together in one place, and results in technology and knowledge spillovers.  For instance, the dense concentration of asset managers in Manhattan attracts newly minted MBAs looking for jobs, service providers scouring for clients, and management teams pitching debt and equity offerings.  Analysts at these shops can easily and informally get together and share ideas.  This set of knowledge and resources, in turn, compels asset managers to set up shop in Manhattan, reinforcing the feedback loop.

I think you see where I’m going with this.  The day-to-day interactions that a business used to have with its suppliers, partners, and customers in physical space – trading securities, coordinating product development, placing an order, paying the bills – have been increasingly mapped onto a virtual landscape over the last several decades.  Datacenters are the new cities.  Equinix’s critical competitive advantage, what separates it from being a commodity lessor of power and space, resides in the network effects spawned by connectivity among a dense and diverse tenant base within its 180+ data centers.  You might also cite the time and cost of permitting and constructing a datacenter as an entry barrier, and this might be a more valid one in Europe than in the US, but I think it’s largely beside the point.  The real moat comes from convincing carriers to plug into your datacenter and spinning up an ecosystem of connecting networks on top.

The roots of this moat extend all the way back to the late ’90s, when major telecom carriers embedded their network backbones into datacenters owned by Interxion, Telx [acquired by Digital Realty in October 2015], and Equinix, creating the conditions for network effects to blossom over the ensuing decade+: a customer will choose the interconnection exchange on which it can peer with many other relevant customers, partners, and service providers; carriers and service providers, in virtuous fashion, will connect to the exchange that supports a critical mass of content providers and enterprises.  Furthermore, each incremental datacenter that Equinix or Interxion builds is both strengthened by and reinforces the existing web of connected participants in current datacenters on campus, creating what are known as “communities of interest” among related companies, like this (from Interxion):

[The bottom layer, the connectivity providers, used to comprise ~80% of INXN’s revenue in the early 2000s]

So, for instance, and I’m just making this up, inside an Interxion datacenter, Netflix can manage part of its content library and track user engagement by cross-connecting with AWS, and distribute that content with a high degree of reliability across Europe by syncing with any number of connectivity providers in the bottom layer.  In major European financial centers, where Interxion’s datacenter campuses host financial services constituents, a broker who requires no/low latency trade execution and data feeds can, at little or no cost, cross-connect with trading venues and providers of market data who are located on the same Interxion campuses.  Or consider all the parties involved in an electronic payments transaction, from processors to banks to application providers to wireless carriers, who must all trade traffic in real time.  These ecosystems of mutually reinforcing entities have been fostered over nearly 20 years and are difficult to replicate.  Customers rely on these datacenters for mission critical network access, making them very sticky, as is evidenced by Equinix’s MRR churn rate of ~2%-2.5%.

[Here’s a cool visual from Equinix’s Analyst Day that shows how dramatically its Chicago metro’s cross-connects have proliferated over the last 6 years, testifying to the network effects at play.  Note how thick the fibers on the “Network” cell wall are.  Connectivity providers are the key.]

The carrier-rich retail colocation datacenters that I refer to in this post differ from their more commodified “wholesale” cousins in that the latter cater to large enterprises that lease entire facilities, design and construct their architectures, and employ their own technical support staff.  Retail datacenters with internet exchanges, meanwhile, are occupied by smaller customers who lease by the cabinet, share pre-configured space with other customers, and rely on the DC’s staff for tech support.  But most critically, because wholesale customers primarily use DCs for space and power rather than connectivity, they do not benefit from the same network effects that underpin the connectivity-rich colo moat.  It is the aggregation function that gives rise to a fragmented customer base of enterprises, cloud and internet service providers, and system integrators (Equinix’s largest customer accounts for less than 3% of monthly recurring revenue) and allows the IX colo to persistently implement price hikes that at least keep up with inflation.

This is not the case for a wholesale DC provider, who relies on a few large enterprises that wield negotiating leverage over them.  DuPont Fabros’ largest customer is over 25% of revenue; its second largest accounts for another 20%.  A simple way to see the value differentiation between commodity wholesale and carrier-rich retail data center operators is to compare their returns on gross PP&E over time.

EBITDA / BoP Gross PP&E, avg. 2011-2016
Wholesale:
DuPont Fabros: 9.0%
Digital Realty: 10.1%
IX Retail:
Equinix: 16.7%
Interxion: 15.1%

[There are also retail colos without internet exchange points that deliver more value for their customers than wholesale DCs but less compared to their IX retail brethren, and some DCs operate a hybrid wholesale/retail model as well.  It’s a spectrum]  

So you can see why wholesale DCs have been trying to break into the IX game organically, with little success, for years.  Digital Realty, which today gets ~15% of its revenue from interconnection, bought its way into the space through its acquisition of Telx in October 2015, followed by its acquisition of a portfolio of DCs from Equinix in July 2016.  The secular demand drivers are many…I’m talking about all the trends that have been tirelessly discussed for the last several years: enterprise cloud computing, edge computing, mobile data, internet of things, e-commerce, streaming video content, big data.  These phenomena are only moving in one direction.  We all know this and agree.

But it’s not just about the amount of data that is generated and consumed, but also about how.  The ever hastening pace and competitiveness of business demand that companies have access to applications on whatever device wherever they happen to be; the data generated from their consumption patterns and from the burgeoning thicket of IoT devices need to, in turn, be shot back to data center nodes for analysis and insight.  And the transfer of data to and from end users and devices to the datacenters needs to happen quickly and cheaply.  Today, the typical network topology looks something like this…

…a hub-and-spoke model where an application traverses great lengths from a core datacenter located somewhere in the sticks to reach end users and data is then backhauled from the end user back to the core.  This is expensive, bandwidth-taxing, slow, and, because it is pushed over the public internet, sometimes in technical violation of strict security and privacy protocols.  You can imagine how much data a sensor-laden self-driving car generates every minute and how unacceptably long it would take and how expensive it would be to continuously transfer it all back to the core over a 4G network.  Instead, the IT network should be reconfigured to look like this…

…a widely-distributed footprint of nodes close to end user/device, each node hosting a rich ecosystem of networks and cloud partners that other networks and cloud partners care about, pushing and pulling bits to and from users more securely and with far less latency vs. the hub/spoke configuration.  Microsoft clearly shares the same vision.  Satya Nadella on MSFT’s fiscal 4q conference call:

“So to me that’s what we are building to. It’s actually a big architectural shift from thinking purely of this as a migration to some public cloud to really thinking of this as a real future distributed computing infrastructure and applications…from a forward looking perspective I want us to be very, very clear that we anticipate the edge to be actually one of the more exciting parts of what’s happening with our infrastructure.”

Something to consider is that while distributed computing appears to offer tailwind for the IX colos, it can have existential whiffs when pushed to the extreme.  Is it really the long-term secular trend that EQIX management unequivocally proclaims it to be?  Or is it just a processing pit stop for workloads that are inexorably inching their way further to the edge, to be directly manipulated by increasingly intelligent devices?

Consider that this past summer, Microsoft released Azure IoT Edge, a Windows/Linux solution that enables in-device AI and analytics using the same code running in the cloud.  To draw an example from Microsoft’s Build Developer conference, Sandvik Coromant, a Swedish manufacturer of cutting tools, already has machines on its factory floors that send telemetry to the Azure cloud, where machine learning is applied to the data to predict maintenance needs and trigger preemptive machine shutdowns when certain parameters are tripped.  But with Azure IoT Edge, that logic, and a whole menu of others that used to reside solely in the cloud, can now be ported directly to the machines themselves.  The process loop – sending telemetry from the device to the cloud, analyzing it, and shooting it back down to the device – is obviated, cutting the time to decommission a faulty machine from 2 seconds down to ~100 milliseconds.  While this might seem to render the cloud node inert, note that the algorithms are still developed and tested in the data center before being exported to and executed on the device…and even as in-device local AI becomes more sophisticated, the data deluge from burgeoning end nodes will still need to be synced to a centralized processing repository to more intensively train machine learning algorithms and generate predictive insights that are more expansive than can be derived locally.

But there is also the fear that as enterprises consider moving workloads off-premise, they bypass hybrid [public + private colocated or on-premise cloud services] and host mostly or entirely with a public hyperscale vendor (AWS, Azure, Google Cloud) [a colocated enterprise brings and maintains its own equipment to the datacenter, whereas a public cloud customer uses the equipment of the cloud provider] or that current hybrid enterprises migrate more and more workloads to the public cloud…or that public cloud vendors build out their own network nodes to host hybrid enterprises.  But by all accounts, Equinix is in deep, mutually beneficial partnership with Cloud & IT services customers (AWS, Google, Azure, Box, SaaS companies), who have been the most significant contributors to Equinix’s monthly recurring revenue (MRR) growth over the last several years.  The hyperscalers are relying on connectivity-rich colos like Equinix and Interxion to serve as their network nodes to meet latency needs on the edge.

There are 50 or so undersea cable initiatives in the world today that are being constructed to meet the proliferating amount of cross-border internet traffic, which has grown by 45x over the last decade.  These subsea projects are being funded not by telecom networks as in days past, but by the major public cloud vendors and Facebook, who are landing many of those cables directly on third party interconnection-rich colos that host their web services.


[Source: TeleGeography]

Cloud & IT customers comprise half of Equinix’s top 10 customers by monthly recurring revenue (MRR) and operate across all three of the company’s regions (Americas, EMEA, and APAC) in, on average, 40 of its datacenters [compared to 4 of the top 10 operating in fewer than 30 datacenters, on average, just a year ago].  The number of customers and deployments on Equinix’s Performance Hub, where enterprises can cross-connect to the public clouds and operate their private cloud in hybrid fashion, has grown by 2x-3x since 1q15, while 50%+ growth in cross-connects to cloud services has underpinned 20% and 14% recurring revenue CAGRs for Enterprise and Cloud customers, respectively, over the last 3 years.

Still another possible risk factor was trumpeted with great fanfare during CNBC’s Delivering Alpha conference last month by Social Capital’s Chamath Palihapitiya, who claimed that Google was developing a chip that could run half of its computing on 10% of the silicon, leading him to conclude: “We can literally take a rack of servers that can basically replace seven or eight data centers and park it, drive it in an RV and park it beside a data center. Plug it into some air conditioning and power and it will take those data centers out of business.”

While this sounds like your standard casually provocative and contrived sound-bite from yet another SV thought leader, it was taken seriously enough to spark a sell-off in data center stocks and put the management teams of those companies on defense, with Digital Realty’s head of IR remarking to Data Center Knowledge:

“Andy Power and I are in New York, meeting with our largest institutional investors, and this topic has come up as basically the first question every single meeting.”  

To state the obvious, when evaluating an existential claim that is predicated upon extrapolating a current trend, it’s often worth asking whether there is evidence of said trend’s impact today.  For instance, the assertion that “intensifying e-commerce adoption will drive huge swaths of malls into extinction”, while bold, is at least hinted at by moribund foot traffic at malls and negative comps at mall-based specialty retailers over the last several years.  Similarly, if it is indeed true that greater chip processing efficiency will dramatically reduce data center tenancy, it seems we should already be seeing this in the data, as Moore’s law has reliably held since it was first articulated in 1965, and server chips are far denser and more powerful today than they were 5-10 years ago.  And yet, we see just the opposite.

Facebook, Microsoft, Alphabet, and Amazon are all accelerating their investments in datacenters in the coming years – opening new ones, expanding existing ones – and entering into long-term lease agreements with both wholesale and connectivity colo datacenter operators.  Even as colocation operators have poured substantial sums into growth capex, utilization rates have trekked higher.  Unit sales of Intel’s datacenter chips have increased by high-single digits per year over the last several years, suggesting that the neural networking chips that CP referred to are working alongside CPU servers, not replacing them.

It seems a core assumption of CP’s argument is that the amount of data generated and consumed is invariant to efficiency gains in computing.  But cases to the contrary – where efficiency gains, in reducing the cost of consumption, have actually spurred more consumption and nullified the energy savings – are prevalent enough in the history of technological progress that they go by a name, “Jevons paradox”, described in this New Yorker article from December 2010 [2]:

In a paper published in 1998, the Yale economist William D. Nordhaus estimated the cost of lighting throughout human history.  An ancient Babylonian, he calculated, needed to work more than forty-one hours to acquire enough lamp oil to provide a thousand lumen-hours of light—the equivalent of a seventy-five-watt incandescent bulb burning for about an hour. Thirty-five hundred years later, a contemporary of Thomas Jefferson’s could buy the same amount of illumination, in the form of tallow candles, by working for about five hours and twenty minutes. By 1992, an average American, with access to compact fluorescents, could do the same in less than half a second. Increasing the energy efficiency of illumination is nothing new; improved lighting has been “a lunch you’re paid to eat” ever since humans upgraded from cave fires (fifty-eight hours of labor for our early Stone Age ancestors). Yet our efficiency gains haven’t reduced the energy we expend on illumination or shrunk our energy consumption over all. On the contrary, we now generate light so extravagantly that darkness itself is spoken of as an endangered natural resource.

Modern air-conditioners, like modern refrigerators, are vastly more energy efficient than their mid-twentieth-century predecessors—in both cases, partly because of tighter standards established by the Department of Energy. But that efficiency has driven down their cost of operation, and manufacturing efficiencies and market growth have driven down the cost of production, to such an extent that the ownership percentage of 1960 has now flipped: by 2005, according to the Energy Information Administration, eighty-four per cent of all U.S. homes had air-conditioning, and most of it was central. Stan Cox, who is the author of the recent book “Losing Our Cool,” told me that, between 1993 and 2005, “the energy efficiency of residential air-conditioning equipment improved twenty-eight per cent, but energy consumption for A.C. by the average air-conditioned household rose thirty-seven per cent.”

And the “paradox” certainly seems apparent in the case of server capacity and processing speed, where advances have continuously accommodated ever growing use cases that have sparked growth in overall power consumption.  It’s true that GPUs are far more energy efficient to run than CPUs on a per instruction basis, but these chips are enabling far more incremental workloads than were possible before, not simply usurping a fixed quantum of work that was previously being handled by CPUs.

With all this talk around chip speed, it’s easy to forget that the core value proposition offered by connectivity-rich colos like EQIX and INXN is not processing power but rather seamless connectivity to a variety of relevant networks, service providers, customers, and partners in a securely monitored facility with unimpeachable reliability.  When you walk into an Equinix datacenter, you don’t see infinity rooms of servers training machine learning algorithms and hosting streaming sites, but rather cabinets housing huge pieces of switching equipment syncing different networks, and overhead cable trays secured to the ceiling, shielding thousands of different cross-connects.

The importance of connectivity means that the number of connectivity-rich datacenters will trend towards but never converge to a number that optimizes for scale economies alone.  A distributed topology with multiple datacenters per region, as discussed in this post and outlined in this article [3], addresses several problems, including the huge left tail consequences of a single point of failure, the exorbitant cost of interconnect in regions with inefficient last-mile networks, latency, and jurisdictional mandates, especially in Europe, that require local data to remain within geographic borders.  Faster chips do not solve any of these problems.

Incremental returns

An IX data center operator leases property for 10+ years and enters into 3-5 year contracts, embedded with 2%-5% price escalators, with customers who pay monthly recurring fees for rent, power, and interconnection that together comprise ~95% of total revenue.  A typical new build can get to ~80% utilization within 2-5 years and cash flow breakeven inside of 12 months.  During the first two years or so after a datacenter opens, the vast majority of recurring revenue comes from rent.  But as the datacenter fills up with customers and those customers drag more and more of their workloads to the colo and connect with other customers within the same datacenter and across datacenters on the same campus, power and cross-connects represent an ever growing mix of revenue, such that in 4-5 years’ time they come to comprise the majority of revenue per colo and user.

The cash costs at an Equinix datacenter break down like this:

% of cash operating costs at the datacenter:

Utilities: 35%

Labor: 19%

Rent: 15%

Repairs/Maintenance: 8%

Other: 23%

So, roughly half of the costs – labor, rent, repairs, ~half of “other” – are fixed.
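A quick sanity check on that “roughly half” claim, as a minimal sketch in Python; the 50/50 split of the “Other” bucket between fixed and variable is my assumption, not a disclosed figure:

```python
# Datacenter-level cash cost mix from the list above (share of cash opex).
dc_costs = {"utilities": 0.35, "labor": 0.19, "rent": 0.15,
            "repairs_maintenance": 0.08, "other": 0.23}

# Labor, rent, and repairs treated as fixed; "Other" assumed ~half fixed (my assumption).
fixed_share = (dc_costs["labor"] + dc_costs["rent"]
               + dc_costs["repairs_maintenance"] + 0.5 * dc_costs["other"])
print(f"Fixed share of datacenter cash costs: {fixed_share:.0%}")  # ~54%, i.e. roughly half
```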

If you include the cash operating costs below the gross profit line [cost of revenue basically represents costs at the datacenter level: rental payments, electricity and bandwidth costs, IBX data center employee salaries (including stock comp), repairs, maintenance, security services.], the consolidated cost structure breaks down like this:

% of cash operating costs of EQIX / % of revenue / mostly fixed or variable in the short-term?

Labor: 40% / 23% / fixed (including stock-based comp)

Power: 20% / 11% / variable

Consumables & other: 19% / 10% / semi-fixed

Rent: 8% / 5% / fixed

Outside services: 7% / 4% / semi-fixed

Maintenance: 6% / 3% / fixed

With ~2/3 of EQIX’s cost structure practically fixed, there’s meaningful operating leverage as datacenters fill up and bustle with activity.  Among Equinix’s 150 IBX datacenters (that is, datacenters with ecosystems of businesses, networks, and service providers), 99 are “stabilized” assets that began operating before 1/1/2016 and are 83% leased up.  There is $5.7bn in gross PP&E tied up in those datacenters, which are generating $1.6bn in cash profit after datacenter level stock comp and maintenance capex (~4% of revenue), translating into a 28% pre-tax unlevered return on capital.
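For the spreadsheet-averse, here is a minimal sketch of the two figures cited above, the ~2/3 fixed cost share and the ~28% stabilized return; counting the “semi-fixed” buckets as half fixed in the short run is my simplifying assumption:

```python
# Consolidated cash cost mix from the list above (share of total cash opex).
consolidated = {"labor": 0.40, "power": 0.20, "consumables_other": 0.19,
                "rent": 0.08, "outside_services": 0.07, "maintenance": 0.06}

fixed = consolidated["labor"] + consolidated["rent"] + consolidated["maintenance"]
semi_fixed = consolidated["consumables_other"] + consolidated["outside_services"]
print(f"Effectively fixed: {fixed + 0.5 * semi_fixed:.0%}")  # ~67%, i.e. ~2/3

# Stabilized-asset economics cited above ($bn).
gross_ppe, cash_profit = 5.7, 1.6
print(f"Pre-tax unlevered return on capital: {cash_profit / gross_ppe:.0%}")  # ~28%
```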

Equinix is by far the largest player in an increasingly consolidated industry.  It got that way through a fairly even combination of growth capex and M&A.  The commercial logic to mergers in this space comes not just from cross-selling IX space across a non-overlapping customer base and taking out redundant SG&A, but also in fusing the ecosystems of datacenters located within the same campus or metro, further reinforcing network effects.  For instance, through its acquisition of Telecity, Equinix got a bunch of datacenters that were adjacent to its own within Paris, London, Amsterdam, and Frankfurt.  By linking communities across datacenters within the same metros, Equinix is driving greater utilization across the metro as a whole.

While Equinix’s 14% share of the global retail colo + IX market is greater than 2x its next closest peer, if you isolate interconnection colo (the good stuff), the company’s global share is more like 60%-70%.  Furthermore, according to management, half of the next six largest players in the chart below are looking to divest their colocation assets, and of the remaining three, two serve a single region and one is mostly a wholesale player.

Equinix points to its global footprint as a key competitive advantage, but it’s important to qualify this claim, as too many companies casually and erroneously point to their “global” presence as a moat.  By being spread across multiple continents, you can leverage overhead cost somewhat, offer multi-region bundled pricing to customers, and point to your bigness and brand during the sales process.  Equinix claims that around 85% of its customers reside in multiple metros and 58% in all three regions (Americas, EMEA, APAC)…but a lot of these multi-region relationships were simply manufactured through acquisition and in any case, the presence of one customer in multiple datacenters doesn’t answer the question that really matters, which is this: does having a connectivity-rich colo in, say, New York City make it more likely that a customer will choose your colo in, say, Paris (and vice-versa) over a peer who is regionally better positioned and has a superior ecosystem?  I don’t see why it would.  I’m not saying that a global presence is irrelevant, just that housing the customer in one region does not make him inherently captive to you in another.  A customer’s choice of datacenter will primarily be dictated by regional location, connectivity, ecosystem density, and of course, reliability and security.

Which is why I wouldn’t be so quick to conclude that Equinix, by virtue of its global girth, wields an inherent advantage over Interxion, another fine connectivity-rich colo that gets all its revenue from Europe.  Over the years, INXN has been a popular “play” among eventy types hoping for either a multiple re-rating on a potential REIT conversion or thinking that, as a $3.6bn market cap peon next to an acquisitive $36bn EQIX, the company could get bought.  But the company has its fundamental, standalone charms too.

The European colos appear to have learned their lesson from being burned by overexpansion in the early 2000s, and have been careful to let demand drive high-single digit supply growth over the last decade.  As tirelessly expounded in this post, replicating a carrier rich colo from scratch is a near insuperable feat, attesting to why there have been no new significant organic entrants in the pan-European IX data center market for the last 15 years and why customers are incredibly sticky even in the face of persistent price hikes.  European colos are also riding the same secular tailwinds propelling the US market – low latency and high connectivity requirements by B2B cloud and content platforms – though with a ~1-2 year lag.

The combination of a favorable supply/demand balance, strong barriers to entry, and secularly growing demand drivers =

The near entirety of INXN’s growth has been organic too.

Compared to Equinix, Interxion earns somewhat lower returns on gross capital on mature data centers, low-20s vs. ~30%.  I suspect that part of this could be due to the fact that Interxion does not directly benefit from high margin interconnection revenues to the same degree as Equinix.  Interconnect only constitutes 8% of EQIX’s recurring revenue in EMEA vs. nearly 25% in the US.  And cross-connecting in Europe has historically been free or available for a one time fee collected by the colo (although this service is transitioning towards a recurring monthly payment model, which is the status quo in the US).

[INXN has invested over €1bn in infrastructure, land, and equipment to build out the 34 data centers it operated at the start of 2016.  Today, with 82% of 900k+ square feet utilized, these data centers generate ~€370mn in revenue and ~€240mn in discretionary cash flow [gross profit less maintenance capex] to the company, a 23% annual pre-tax cash return on investment [up from mid-teens 4 years ago] that will improve further as recurring revenue accretes by high-single digits annually on price increases, capacity utilization, cross-connects, and power consumption.]
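The same arithmetic for Interxion, as a rough sketch; the post only says “over €1bn” invested, so the ~€1.05bn base below is my assumption to make the calculation concrete:

```python
# INXN back-of-the-envelope from the bracketed paragraph above (€bn).
invested_capital = 1.05   # assumed; the post says "over €1bn"
discretionary_cf = 0.24   # gross profit less maintenance capex
print(f"Pre-tax cash return on investment: {discretionary_cf / invested_capital:.0%}")  # ~23%
```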

But in any case, the returns on incremental datacenter investment are certainly lofty enough to want to avoid the dividend drain that would attend REIT conversion.  Why convert when you can take all your operating cash flow, add a dollop of leverage, and invest it all in projects earning 20%+ returns at scale?  As management recently put it:

“…the idea of sort of being more tactical and as you described sort of let – taking some of that capital and paying a little bit of dividend, to me, that doesn’t smack of actually securing long-term, sustainable shareholder returns.” 

Equinix, on the other hand, must at a minimum pay out ~half its AFFO in dividends, constraining the company’s organic capacity to reinvest and forcing it to persistently issue debt and stock to fund growth capex and M&A.  Not that EQIX’s operating model – reinvesting half its AFFO, responsibly levering up, earning ~30% incremental returns, and delevering over time – has shareholders hurting.

[AFFO = Adj. EBITDA – stock comp – MCX – taxes – interest]
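Translating that bracketed definition into a trivial helper; the inputs below are hypothetical round numbers for illustration, not Equinix’s actual figures:

```python
def affo(adj_ebitda, stock_comp, maintenance_capex, taxes, interest):
    # AFFO = Adj. EBITDA – stock comp – maintenance capex (MCX) – taxes – interest
    return adj_ebitda - stock_comp - maintenance_capex - taxes - interest

a = affo(adj_ebitda=1000, stock_comp=150, maintenance_capex=100, taxes=120, interest=180)
print(a)         # 450
print(a * 0.5)   # roughly half of AFFO goes out the door as the REIT dividend, per the above
```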

And there’s still a pretty long runway ahead, for both companies.  Today’s retail colocation and interconnection TAM is around $23bn, split between carrier neutral colos at ~$15bn and bandwidth providers at ~$8bn, the latter growing by ~2%, the former by ~8%.  Equinix’s prediction is that the 8% growth will be juiced a few points by enterprises increasingly adopting hybrid clouds, so call it 10% organic revenue growth, which would be slower than either company has registered over the last 5 years.  Layer in the operating leverage and we’re probably talking about low/mid-teens maintenance free cash flow growth.

At 28x AFFO/mFCFE, EQIX and INXN are not statistically cheap stocks.  But it’s not so easy to find companies protected by formidable moats with credible opportunities to reinvest capital at 20%-30% returns for many years.  By comparison, a deep-moater like VRSK [4] is trading at over 30x free cash flow, growing top-line by mid/high single digits, and reinvesting nearly all its prodigious incremental cash flow in share buybacks and gems like Wood Mac and Argus that are unlikely to earn anywhere near those returns.

Notes

INXN claims to be the largest pan-European player in the market, which is technically true but also a bit misleading because in the big 4 European markets (France, Germany, Netherlands, and the UK) that constitute 65% of its business, by my estimate, Interxion still generates less than 1/3 the revenue of Equinix.  Even before the Telecity acquisition in January 2016, EQIX generated more EMEA revenue than Interxion, but now it has more datacenters and across more countries in the region too [well, depending on how you define “region” as the set of countries covered in Equinix’s EMEA is more expansive than that covered by Interxion].

Podcast Blurbs [Larry Summers, Levered restaurant franchisees, Books vs. podcasts, Cramer on MongoDB]


Freakonomics Radio (9/28/17; Why Larry Summers is the Economist Everyone Hates to Love)

Larry Summers:

(On infrastructure)

“I think we’ve completely mismanaged infrastructure investment in the United States.  It’s nuts that when interest rates are lower than they’ve been at any time in the last 50 years that we’re also investing less net of depreciation…it’s nuts that we have a regulatory apparatus that means that it took far longer to repair a single exit of the Oakland Bay Bridge than it did to build the entire Oakland Bay Bridge two generations ago.  There’s a small bridge across the Charles River that I’m looking at outside my office.  It’s about 300 feet long.  It was under repair with a lane of traffic closed for 5 years.  Julius Caesar built a bridge over the span of the Rhine that was 9x as long in 9 days.  So, both on the quantity of expenditure and on the efficiency of the expenditure and the streamlining of the effort, there’s plenty of room for improvement.”

(On tax repatriation)

“We have $2.5tn sitting abroad.  Indulge me if you will in an analogy.  Suppose you ran a library.  Suppose you had a lot of overdue books from your library.  You might decide to give the library amnesty so that people would bring the books home.  You might decide to say that there will never be an amnesty and people better bring the books back because otherwise the fines are going to mount.  But only an idiot would put a sign on the library door saying ‘No amnesty now, thinking about one next month.’  And yet, what have we done as a country?  We’ve said to all those businesses with $2.5tn abroad that if you bring it home right now, you’ll have to pay 35% tax, but we’re talking about and thinking about and planning maybe we’ll have some kind of tax reform where that will come down.”

(On cost disease)

“…the Consumer Price Index for all products are set to be 100 in 1983.  Well, if you look at the CPI for television sets, it’s now about 6.  If you look at the CPI for a day in the hospital room or a year in a college, it’s about 600.  In other words, since 1983, the relative price of this relative measure of education and healthcare as opposed to the TV set has changed by a factor of 100.  That’s got a number of consequences.  One is, since government is more involved in buying education and healthcare than it is in buying TVs, there’s going to be upward pressure on the size of government relative to the rest of the economy.  Another is because there’s been far greater productivity growth in the production of TV sets than in the production of government goods, a larger fraction of the workforce is going to find itself working in the areas where there’s less productivity growth, like education and healthcare…if workers become much more productive doing some things and their wage has to be the same in all sectors, then there’s going to be a tendency for the price in the areas in which labor is not becoming productive, to rise.  And that’s why it costs more to go to the theater relative to when I was a child, that’s why tuition in colleges has risen, that’s why the cost of mental health counseling has risen.”

Grant’s Podcast (10/11/17; Hamburger Helper)

John Hamburger (President of the Restaurant Finance Monitor):

“I’ve been following franchisees since the early ’80s, I actually was a franchisee back in the early ’80s.  Restaurants used to get financed one by one, unit by unit, and you’d buy the land, you’d buy the building, you’d equip it, and you had to finance each of those transactions so you’d have a real estate loan and you’d have an equipment loan.  And it took a long time and it was expensive and you thought really carefully about building restaurants.  Today, the way the deal works is that there is a huge net lease market out there that likes to own restaurant real estate.  So, franchisees over the last 5 to 10 years have decided that they can build more stores, own more stores, buy more stores if they don’t have their capital tied up in real estate.  So franchisees, just like the franchisors, have adopted this quasi asset light model where the real estate in a lot of these franchise restaurant locations are owned by an investor.  And that’s okay, it’s just that when you own real estate, you own it and can borrow against it.  When you lease real estate, the rents go up every year and so over time, it cuts down on the operating margins of the franchisees.

What does it all mean?  It’s allowed franchisees to get bigger.  So you look at Yum Brands or Burger King, they’re relying on fewer and fewer franchisees to operate their system and where risk might possibly come in is that you’ve got fewer and fewer franchisees operating these restaurants around the country.  I’ve heard Restaurant Brands tell Burger King they’d like to have as few as 50 franchisees in the country.  Well, you take 50 franchisees, you lever them up, it’s not like it was 10-20 years ago where in Burger King you had 600 franchisees and the majority of them owned their own real estate.  It’s a very levered system, the way it’s currently configured. […]

What’s interesting about Carrols (Burger King franchisee), it’s a typical franchisee in the restaurant space, it’s a larger franchisee, it’s ranked around #4 in the country in terms of size, they’re doing a lot of what the large franchisees have done over the last 5 years.  They’ve been able to borrow, they’ve been able to acquire other franchisees and build up their base of restaurants…but other franchisees have grown as well by buying up other franchise restaurants…how do they do that?  They borrow and then they sell their real estate and lease it back, that’s how all of this stuff gets financed and it adds a lot of leverage to the franchisees.

The largest owner of [this real estate] are a number of REITs focused on the restaurant business, there’s Realty Income, there’s Spirit Realty.  There’s also individuals, real estate developers, trusts, and one of the reasons they like owning this kind of real estate generally these leases are triple net, which means taxes, maintenance, and the insurance of those properties are taken care of by the tenant, so the rental income becomes an annuity to the owner, and this is a huge investment area in the US.

If I go back to the ’80s and ’90s, capital in the restaurant business came from the public markets.  We had a run of IPOs in the 1980s and 1990s, that’s where most of the growth capital came from.  Today, you have private equity funds instead of public investors driving the restaurant business.  The public markets have changed where smaller companies…have been unable to go public.  I think the last IPO in the restaurant business was 2015.”

Waking Up (10/6/17; The “After On” Interview)

Sam Harris:

“The numbers [of people who listen to the Waking Up podcast] are really surprising and don’t argue for the health of books, frankly.  A very successful book in hard cover, you are generally very happy to sell 100,000 books in hard cover over the course of the first year before it goes to paperback.  That is very likely going to hit the best-seller list, maybe if you’re a diet book you need to sell more than that, but if you sold 10,000 your first week, you almost certainly have a best seller.  And in the best case, you could sell 200,000 or 300,000 books in hardcover, and that’s a newsworthy achievement.  And there’s the 1/100th of 1% that sell millions of copies.  So, with a book I could reasonably expect to reach 100,000 people in a year and maybe some hundreds of thousands over the course of a decade.  So, all my books together now have sold, I’m pretty sure I haven’t reached 2 million people with those books.  Somewhere between 1 million and 2 million.

But with my podcast, I reach that many people in a day.  And these are long form interviews and sometimes standalone, just me talking about what I think is important to talk about for an hour or two, but often I’m speaking with a very smart guest and we can go very deep on any topic we care about.  And this is not like going on CNN and speaking for 6 minutes in attempted sound bites and you’re gone.  People are really listening in depth. […]

It’s a big commitment to write a book.  Once it’s written, you hand it in to your publisher and it takes 11 months for them to publish it.  Increasingly, that makes less and less sense.  Both the time it takes to do it and the time it takes to publish it don’t compare favorably with podcasting.  In defense of writing, there are certain things that are best done in written form.  Nothing I said has really application to [novels], reading novels is still an experience you want to have, but what I’m doing in non-fiction, that’s primarily argument driven, there are other formats through which to get the argument out.  I still plan to write books because I still love to read books and taking the time to really say something as well as you can affects everything else you do, it affects the stuff you can say extemporaneously in a conversation such as this as well.  So, I still value the process of writing and taking the time to think carefully about things.”

Mad Money w/ Jim Cramer (10/25/17)

Jim Cramer:

“At the heart of every software application there’s a database.  You need to have a system to organize, store, and process your files or none of this stuff works.  So, for software developers, a lot of thought goes into picking the right database and with the rise mobile, social, cloud, datacenter, and IoT, a lot hinges on getting that choice right.  For the longest time, there were just two types.  You had relational databases that are basically unchanged since the 1970s, so developers need to spend a lot of time making sure modern software interacts properly with these rigid database structures from decades ago.  They simply were not designed for the demands of modern software and they certainly weren’t designed for cloud.

Since the turn of the century, though, we’ve seen the rise of non-relational databases which tried to address these shortcomings.  But the problem is that so much runs on old school relational databases, these new non-relational ones are only worth using in a small number of cases.  Then you have non-relational databases that have become more popular recently and are widely used for big data and online applications…basically, they’re more flexible than the traditional model.

On the other hand, MongoDB does something different.  The guys who created this company got frustrated by the available database options on the market, so they built their own platforms designed for developers by developers.  The company has its own offering that they believe combines the best of both relational and non-relational databases. They use a document-based architecture that’s more flexible, easier to scale, more reliable, giving developers the ability to manage their data in a more natural way so they can more rapidly build, deploy, and maintain the software they’re working on.  It works in any environment – the cloud, on premise, or even as some kind of hybrid which has become so popular – but more important, you can use it for a broad range of applications.

To get the word out, MongoDB offers a free, stripped down version of their database platform.  You can just download it right off the website. That makes it easier for software developers to play around with the thing, if they decide they want advanced features, they can pay for any upgrade…The free version has been downloaded more than 10mn times just in the last year.  Then they sign you up for a subscription and you start paying for the enterprise version or the cloud-based version.  So far, MongoDB has more than 4,300 customers in 85 countries including some really well-known names, they’ve got Barclays, ADP, Morgan Stanley, AstraZeneca, Genentech, and a bunch of government agencies […]

How’s it work?  Let’s use the example of Barclays.  Like most financials, Barclays has invested a ton of money going digital in recent years.  But the rigid nature of old school database technology made it really hard for them to add new online features that customers could use.  And as more customers embrace mobile banking, the cost of running the mainframe kept rising, so Barclays brought in MongoDB in 2012 and it gave them significant performance improvements and a resilient system and major cost savings and the company’s in-house software developers have a much easier time developing features for digital banking.

How about the numbers?  Well, MongoDB’s revenue growth has slowed a tad so far in 2017 but it’s still very rapid, up 51% y/y and that’s not much of a deceleration from 55% in 2016.  The customer base is growing like a weed.  At the end of July, they had 4,300 customers, up from 3,200 in January and 1,700 at the beginning of 2016.  And a total of 1,900 of these users come from MongoDB’s cloud offering even though it only came out in the summer of last year.  The company has 71% gross margins, which have improved steadily over the last couple of years.  But like many newly minted tech IPOs, MongoDB is not yet profitable…the key here is annual recurring revenue (ARR) because remember they use a SaaS business model and then the contribution margin…in 2015, customers who signed up that year generated $11.5mn in ARR but they racked up $24.3mn in associated costs, meaning the contribution margin was negative.  By 2016 though, that same group of customers who signed up in 2015 generated $12.8mn in ARR but costs only came in at $5.2mn, makes sense these guys had already been signed up, giving MDB a 59% contribution margin.  2017, it’s risen to 60%.  Basically, as time goes on, customers become more and more lucrative because the real expense is signing them up.

Now there’s really just one thing that concerns me other than lack of profitability.  Mongo competes with some real titans of the industry, I’m talking IBM, Microsoft, Oracle, AWS, Google Cloud, Azure for more modern databases.

[MELI – MercadoLibre] Digging The Moat (Or Is It A Grave?), Part 1

A few weeks ago, I came across the following remark from Joe Weisenthal in my Tweet feed: “Smart people can’t help but try to poke holes in things, and see the weaknesses in existing systems.  But long-term investing requires a certain amount of dumb confidence that things will all work out over time, even if [...]

Podcast Blurbs [Freewill, Jerks, Cramer on Switch]


Planet Money, Ep. 795 (Is Record Breaking Broken)

“Companies started coming to [The Guinness Book of World Records] asking if we could fly someone out to their event.  Companies and people were coming to Guinness wanting to break a record for publicity or to draw attention to themselves.  They’ve gotten on to the fact that if they achieve a record title, they get a lot of press for it or shares on social media.  Increasingly, they want to make it more of a marketing event, they want to invite one of our adjudicators and the usage of the logo.  Guinness realized that they could charge for this, a lot.  The price tag for a full service event starts at $12,000 and goes up over half a million…they do hundreds of events like this every year. […]

Guinness’ main customers used to be kids who were just fascinated by the people who were breaking the records and pushing the limits of what it meant to be a human.  Guinness’ customers now are the record breakers themselves.  If you pay Guinness enough money, they will help you figure out a record you can break and they will help you break it….Guinness is now in the business of selling record-breaking events.”

Mad Money w/ Jim Cramer (10/6/17)

Jim Cramer on Switch:

“As technology gets better and better we’ve seen a ridiculous explosion in the amount of digital information out there.  Companies that want to know what’s going on need to keep track of tons of data.  By some estimates, there are going to be 200bn smart devices connected to the web by 2020.  Even now, the average person generates 3GB of data per day.  We use the internet for everything and increasingly store data in the cloud, which means some datacenter somewhere is where it goes.

Your typical datacenter is just a gigantic building, it’s packed with networks, servers, and air conditioning equipment…but Switch is different.  According to them, the kind of datacenter you need to run a low cost consumer service like Apple Music is just not the same as the kind of datacenters that work best for business critical data storage or highly complex work loads with highly sensitive and regulated information.  Switch has actually patented their datacenter design, the layout and cooling systems allow customers to run more power through their machines.  On top of that, Switch’s platform makes it easy for clients to rapidly deploy or replace technology infrastructure.

Switch’s revenue grew at a 17% clip in 1h17 and while that’s a bit of a deceleration from last year’s growth rate of 20%, it’s still pretty darn good.  These numbers can jump around whenever Switch opens new datacenters.  The key here is that 90% of their revenue is recurring.  Switch’s gross margin came in at a rock solid 48.2% in the first half…Switch is actually, yes, profitable here.  And while its operating margin jumps around a lot because of one-time fees, the fact is that it’s a pretty good business.

However…there are some issues we need to address.  For starters, it’s got a complicated corporate structure.  Before the IPO, Switch operated as a partnership. […] The partnership has now established a standard C-Corp including a holding company, Switch Inc., which is the stock that is now trading.  Switch Ltd, the partnership, is now the sole asset of Switch Inc., which is a little troubling…The founder, Rob Roy, now has 67.7% of the voting power in Switch Inc.  Public shareholders have less than 5%.  It’s never a good thing when you have a multi-tiered ownership structure with the owners of the common stock being treated as second or third class citizens.  It’s rarely a good thing when the people who own the asset are different from the people who control it…For example, because of Switch’s transformation, the insiders who used to own the whole thing are racking up some major tax bills, so Switch is helping to pay those taxes for them.

What else is problematic?  The vast bulk of Switch’s sales, more than 95%, come from a single datacenter campus in Las Vegas…on top of that, Switch gets 38% of its sales from just 10 companies.  Ebay alone accounts for nearly 10%.  But the datacenter business tends to be pretty sticky.  I’m not that concerned.

In 1h17, Switch generated $35.3mn in net income.  To be conservative, let’s just double that for the full year 2017.  So Switch is likely to make $70.6mn.  But at $70.6mn, Switch will be growing at a 20.9% clip vs. last year’s numbers when you subtract one-timers…let’s assume that growth rate decelerates to 15% next year.  With $81.1mn in net income divided by the share count, I’m estimating that Switch could earn 28c per share this year, 32c next year…that means it’s selling for 76x earnings and 66x next year’s numbers.  Obviously that’s expensive.  However, I think I’m low-balling…next year, Switch will start selling datacenter space at its new center in Atlanta and remember every time they’ve opened a new datacenter, their sales have immediately surged higher.  It’s like a staircase.  But even if Switch has an earnings explosion next year, it might be too expensive for me.  Nvidia trades at less than 50x next year’s earnings.  But Nvidia’s earnings are growing at a 41% clip.

On the other hand, Switch is the only pure play on the growth of datacenters out there.  CoreSite, Equinix, Digital Realty are all REITs, which means they need to distribute all their income back to investors.  In other words, they can’t invest as heavily in growth.  That means Switch has scarcity value.”

Waking Up, Ep. #27 (Ask Me Anything)

Sam Harris:

“The distinction between voluntary and involuntary action is an important one.  But it’s not one that requires a belief in free will to describe.  There are things we do based on intentions that align with our goals and desires and consciously held purposes.  And there are things that we do automatically or by accident.  When looking at even my most voluntary behavior, I see no evidence of free will.

Let’s say you’re deciding whether to marry your boyfriend or not, it doesn’t matter how long you think about that.  This could be the most deliberative decision of your life.  What finally swings the balance between ‘yes’ and ‘no’ is in principle mysterious, subjectively, and is arising out of causes and conditions that you did not create.  If you love this man’s smile, you didn’t create your love for it.  You love it precisely to the degree that you do, you didn’t create his smile in the first place, you didn’t create its effect on you, you didn’t create that this is something you care about.  And so it is with everything else that you like or dislike about this person.  Your association with marriage, how important it is to you, how idealistic you are about it, how urgently you feel you have to enter into it, the advice you get from friends and family and its effect on you…all these dials are getting tuned by forces you can’t see, did not engineer, and cannot control.  And even your moments of apparent control, well that just arises out of background causes that you can’t inspect and which are moving you to the precise degree that they are for reasons that are inscrutable.”

Very Bad Wizards, Ep. #77 (On the Moral Nature of Nazis, Jerks, and Ethicists)

Eric Schwitzgebel:

“In my view, a jerk is someone who culpably fails to appreciate the perspectives of other people around them, treating those people as tools to be manipulated or fools to be dealt with rather than moral and epistemic peers…it’s a kind of moral ignorance and epistemic ignorance.  So you’re ignorant of what you owe to those people and you’re ignorant of what those people, when they differ from you in their opinions, what kind of validity they might have in their perspective that’s contrary to yours…I wanted to make it ‘culpable’ ignorance because I didn’t want it to be the case that babies are jerks.”

 

[BLK – Blackrock] Not Another Passive vs. Active Debate

Back in June 2009, reeling from the financial crisis and desperate to fix its balance sheet, Barclays agreed to sell Barclays Global Investors (BGI) to BlackRock, who paid ~$13.5bn in cash and stock for BGI’s ~$1tn in assets, which included $385bn in AUM from its vaunted iShares ETF franchise.  Over the last 8 years, iShares [...]

[MCEM – Monarch Cement] Ash Grove Part Deux?

[MCEM – Monarch Cement] On Sept. 20, Ash Grove announced that it agreed to be acquired by its largest customer, CRH plc, a global producer of building materials, including cement, with €27bn in sales and €3bn in EBITDA across 31 countries. [For background on Ash Grove, see these previous posts]. “While the final amount of the [...]

Podcast Blurbs [Small cap investing, Ackman on ADP, Cramer on Dexcom, Journalism]


The Investors Podcast (9/17/17, Small Cap Investing w/ Eric Cinnamond)

Eric Cinnamond (a veteran small-cap manager…writes a good blog [7])

“Back [when I started my career], the largest holders of these small cap stocks were traditional small cap value managers like the Royce Funds, Heartland, Gabelli, and when prices got out of whack, they were there to police the market almost like a market maker.  But now when I look at the top holders, I see Vanguard, Dimensional, Blackrock.  They are price insensitive investors…this is what’s going to be a significant contributor to small cap opportunities in the future.”

“Cycles vary between industries and businesses, so you’ve got to be careful not to use a standard 5 to 10 year period [when normalizing profits].  The Shiller P/E is 10 years.  Well, 10 years could include 2 upcycles and 1 downcycle, so you’re not really normalizing.  Or it could include 2 down and 1 up.  I like to customize my normalization to the particular business or industry.”

“1993 to 2000, for most of that cycle, everything I touched turned to gold, it was unbelievable.  1993 to 1998, I mean I had just graduated from college, I was running trust money, $300mn, I couldn’t believe they had given me all this money to run and I was doing well.  And then in 1996, I joined Evergreen Funds as a small cap value manager and the trend continued. […]  It was the profit cycle and I had no idea.  I was a young analyst and I thought I was a genius.  But then of course the tech bubble hit and ‘wow’ that was a humbling experience, where I went from the genius to the idiot.  That was a very difficult cycle.  I’m most proud of 1999 because I lost 8%; I don’t know if you could lose 8% in 1999 even if you tried, but somehow I did it because I ignored tech.

Bill Miller is right in that where you start can influence how you perceive yourself and how others perceive you.  You think about the average age of an analyst or manager now, I would guess it’s 8-10 years maybe?  And what’s the length of the cycle, 9 years?  So it’s interesting, lots of the investors now running billions and billions, they’re in the same position I was in in the ’90s when I thought I could do no wrong and I was bullet proof.  And that’s kind of scary. […] The losses that could occur in this cycle with where valuations are?  If you just revert to normal valuations, you could lose half your capital in small caps.”

“Mutual Fund cash levels are at 3%.  Meanwhile, there’s a survey that showed that 80%+ of portfolio managers believe stocks are as expensive as 2000.  So, you have a huge conflict here…so, if they think stocks are overvalued and they start losing 10%-30% of their clients’ capital, I don’t know how they’re going to respond.  I think because they know stocks are expensive, they’re going to be quicker to sell.”

“It’s been so long since we had a panic that when it does happen, it’s going to be so new to so many people, even people who are experienced, they haven’t seen it in so long. […] Everyone thinks they’ll be the first one out when the cycle ends, but if you’re running a billion of small cap money and the Russell 2000 drops 30%, I have news for you.  You’re not getting out.”

“Another way I screen for stocks is I do role playing, where I’m trying to pretend I’m a relative return manager with a really big house, country club membership, etc. and running $1bn+ and then I think to myself, all right, I’ve got 10 very large consultant meetings next week – what do I not want to talk about?  That is usually one of the best ways to be a contrarian.  And right now where would you look?  I would think you’re approaching the year-end performance panic…I think you might want to start looking at energy and retail.  Those might be the two most embarrassing sectors right now for professional managers.”

“A lot of high quality value investors won’t even consider commodity companies because we’re taught in school and in all the great books that they’re bad businesses.  But if I can buy natural gas in the ground for a dollar and it costs $2 to find and develop that, or if I can buy an ounce of gold in the ground fully developed for $150/ounce and it costs $300/ounce to find and develop that, those are the kinds of things I’m interested in.  I view mining businesses more from a balance sheet valuation standpoint…I want to buy the reserves for less than it costs to replace them and that has worked very well for me over time.  Commodity stocks and other cyclicals are either extremely undervalued or extremely overvalued. […] but they need to have a good balance sheet, this is the key.  You need a runway and you need to determine the appropriate runway…Tidewater had $50/share in book value and it went bankrupt.”
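
[His reserve screen boils down to a simple ratio: what you pay per unit of reserves in the ground versus what it would cost to find and develop that same unit.  A quick sketch using the illustrative figures from the quote; these are his round numbers, not estimates for any particular company.]

# Cinnamond's heuristic: buy reserves for less than their replacement
# (finding & development) cost.  Figures are the round numbers from the
# quote, not estimates for any specific company.
def discount_to_replacement(price_per_unit, f_and_d_cost_per_unit):
    """Fraction of replacement cost being paid for reserves in the ground."""
    return price_per_unit / f_and_d_cost_per_unit

gas = discount_to_replacement(1.00, 2.00)   # natural gas: $1 paid vs. $2 to replace
gold = discount_to_replacement(150, 300)    # gold: $150/oz paid vs. $300/oz to replace
print(f"Gas at {gas:.0%} of replacement cost, gold at {gold:.0%}")
# Both screen as cheap under this rule of thumb -- provided, as he stresses,
# the balance sheet gives the company enough runway.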

Mad Money w/ Jim Cramer (10/4/17)

Bill Ackman on ADP:

“Our new question for ADP is ‘why is it that ADP has lower employee productivity than all of its competitors?’  So, ADP generates $160k of revenue per employee; the competitors average $224k.  When you think about ADP, it has enormous scale vs. competitors, so if anything, they should have more efficiency.”

“25% of ADP competes directly with Paychex.  Same size customer.  So what we said to ADP was ‘Paychex has 41% pre-tax profit margins.  But they’re largely an SMB company.  If ADP’s SMB segment had the same margins as Paychex, it would mean the rest of their business has margins of 12%.  And that makes no sense.’  So, clearly there’s a big opportunity.”
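
[To close the loop on that arithmetic: getting from a 41% Paychex-like margin on 25% of ADP to a 12% margin on the remaining 75% requires an assumption about ADP’s blended pre-tax margin, roughly 19%, which is not stated in the quote.  A minimal sketch:]

# Reconstructing the implied-margin arithmetic in Ackman's ADP/Paychex point.
# The ~19% blended ADP pre-tax margin is my assumption, needed to close the
# math; it is not stated in the quote.
smb_share = 0.25             # portion of ADP said to compete head-to-head with Paychex
paychex_margin = 0.41        # Paychex pre-tax margin, per the quote
adp_blended_margin = 0.19    # assumed ADP company-wide pre-tax margin

# blended = smb_share * smb_margin + (1 - smb_share) * rest_margin
rest_margin = (adp_blended_margin - smb_share * paychex_margin) / (1 - smb_share)
print(f"Implied pre-tax margin on the rest of ADP: {rest_margin:.0%}")
# -> roughly 12%, the figure Ackman calls out as making "no sense"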

“We’re the third largest investor in the company by dollars spent.  We’ve got a $2.3bn investment in the company and we own almost 9mn shares in the company so our interests are very much aligned with the other owners of the company.  Look at all the directors on the Board.  Our candidates haven’t even joined the Board yet [and they’ve] spent more money to buy ADP common stock for themselves than the entire Board has spent in the last 14 years.”

Jim Cramer on Dexcom:

“We’ve talked about the benefits of Dexcom’s technology before, but in the last couple years the stock’s begun to stall.  Then, last week the darn thing fell out of the sky, plummeting more than 30% in a single session…Dexcom’s problem is pretty straightforward.  A week ago, we learned that Abbott Labs had received FDA approval for its new FreeStyle Libre Flash Glucose Monitoring System for people with both Type 1 and Type 2 Diabetes…In the old days, if you wanted to check your blood sugar levels, you had to prick yourself and draw blood with one of those finger sticks.  With Dexcom’s system, you just wear a little sensor and get readings, but you still need to prick your finger twice a day to calibrate the machine.  Abbott’s new system requires zero finger stick calibration…plus, you can leave Abbott’s sensor on your arm for 10 days, longer than Dexcom’s current system (although the same as the system the company hopes to launch later this year)…it got hit with substantial price target cuts from 6 different firms, and that’s how a stock goes from $67 to $45 in a single day.”

“First, why was this such a surprise?  Wall Street expected that Abbott would have a harder time getting FDA approval or at least that it would take longer than it did.  But with earlier than anticipated approval, Abbott has leapfrogged Dexcom.  Consider that Dexcom’s new system, which isn’t even out yet, still requires that you prick your finger once a day.  However, maybe we should have seen this coming.  Abbott’s Libre system was approved in Europe 3 years ago; it’s already being used by 350k people there.  There aren’t many cases where a product works just fine in Europe but the FDA decides to reject it anyway.  And remember, this is the Trump FDA, which means it’s very pro-business and less focused on consumer safety.  Second was Abbott’s pricing strategy.  They decided to be far more aggressive than anyone thought.  Abbott’s system will cost $4/day; Dexcom’s will set you back between $8 and $10 per day, including the cost of hardware.  And this is before any kind of insurance reimbursement for Abbott.  Even worse for Dexcom, Abbott told us they already had 5 of the largest pharmacies lined up to sell the thing, with distribution planned to start in December.  This is brutal.  Before Abbott’s system got approval, Dexcom’s system was pretty much the only player.”

“Even if Dexcom doesn’t lose tons of market share to Abbott, they’re going to have to get more promotional to keep that business, which means that the company’s excellent hardware margins are going to come down hard.”

“Dexcom’s product is actually better than Abbott’s…it’s more accurate and more reliable…[based on the] average relative difference between the measurements from the monitors and the readings you get from a blood test.  Abbott’s system averages a 9.7% differential…Dexcom’s is more like 7.2%; that sounds small, but if you have diabetes the difference can matter.  Plus, Abbott’s system can lead to a lot of false positives.  Dexcom’s new system has also been shown to be more accurate than Abbott’s with no finger sticks at all.  One reason why Abbott’s system is cheaper is because it takes a much more bare bones approach.  One of the great things about Dexcom is that unlike Abbott’s system, their device gives you real time alerts or alarms, so if you fall asleep and your blood sugar gets too low, it’ll wake you up. […] With Dexcom submitting its new monitoring system to the FDA in the near future, I think the company might have a chance to turn things around.  Plus, they’ve already partnered with Google to develop a cheaper, smaller system that requires no finger prick calibration; that’s expected next year.”

The Ezra Klein Show (David Remnick on journalism in the Trump era and why he hires obsessives)

David Remnick (Editor of the New Yorker)

[In 1998, when Remnick took over as editor, The New Yorker was losing money].  “With The New Yorker, the zenith of advertising was in 1967.  The New Yorker, which was invented in 1925, really as an economic thing, just rode a postwar consumerist boom with the developing middle and upper middle class and all those ads.  And the reason The New Yorker started publishing 3 part and 4 part series was not only the literary and journalistic merit but also the need to have editorial matter running next to…this travel agent and that department store, and this started to change.  Television became bigger, all these other media.  People’s tastes changed.  So, the zenith of advertising for The New Yorker was the late ’60s.

And thereafter, there was a rather slow slide down.  It wasn’t perceptible.  The New Yorker was independent, it was owned by the Fleischmann family and it still made a profit.  The Fleischmanns began to care too late, I would say.  And the Newhouses bought it, it was kind of on the brink, red and black, and then it was distinctly in the red for a good while.  The question was how to change that.  And Tina Brown did a lot of great things to arouse interest in the magazine…and advertising continued to go down, no matter what.  It was very clear after a while to me, this is before tech even really boomed, that despite the tech advertising bubble and Red Envelope, that 1967 was not going to return. […]

We’re now in a situation where Google and Facebook own 2/3 to 3/4 of web advertising.  Retail has changed in this country, there are more options…and there are only two ways to make money in this business: advertising and what’s gently called consumers, meaning the readers, what they pay.  And 25 bucks for 52 issues was crazy…all you’re paying for it is 50c?  Less than one issue of the newspaper on the newsstand at the time?  So the proposition now is that for a subscription, for each week you might pay what you’d pay for a small cappuccino at Starbucks.  And I really think that what we turn out on the web every day, what we publish in print and online immediately at midnight Monday, is worth that at least.  And our readers have agreed and we’ve been making a handsome profit for quite some time.

The happy coincidence is that our readers want what we do when we are at our best.  The worst thing I could do, not only as a moral and journalistic proposition, is dumb the magazine down; it would also be the stupidest thing I could do as a business proposition.  That’s [not] what our readers want.  I don’t need consultants or polls to tell me that.”

“I was always the oldest guy at these early internet dinners or events, and I knew why I was invited, as the kind of editor of a ‘legacy’ media outlet, and of course the glint in the eyes of my younger brothers and sisters was that I was soon to become, like the stegosaurus itself, dead in a ditch.  And there were certain truisms I would be hearing: 1) no one would pay for any content because information wants to be free, which was a misapplication of what that phrase meant; 2) nobody would read anything on the internet of any length.  That was also an evangelical, hard truism that I was hearing all the time.  And I was wrong about a lot of things, but those two things they were wrong about.  People will pay for things that are extraordinary.  People will read things that are great.  It may not be 330mn Americans. […]

I remember once I was interviewing Philip Roth and he was deploring the state of fiction reading audiences.  This was before he became, again, a best selling author.  So he was in a kind of despairing mode.  But to cheer himself up he said, ‘if you write a novel and only 5,000 people buy it and read it, that may seem depressing, but if all 5,000 people streamed through your living room and shook your hand and said “thanks for the 4 evenings that I spent reading this novel”, you would be brought to tears with gratitude.’…Look at the readership of The New Yorker.  There’s a million now…1.25mn readers out of a country of 330mn people.  But if I imagine them as Yankee Stadiums full of people, that’s a whole lot of people being absorbed in texts that are often enigmatic, complicated, take time to read.  I am filled with gratitude.”

[V – Visa; MA – Mastercard] Expanding the Rails, Part 1

The basic moatiness of V/MA has been tirelessly discussed for years and is well understood, so I’ll try not to belabor the point and focus instead on some less frequently explored angles.  But I do need to establish a common denominator, starting with the 4-party model….apologies if you know all this, I’ll be quick about [...]