- scuttleblurb - https://www.scuttleblurb.com -

Blurbs [Online travel, B2B2C implementation]

Executive Roundtable: Street Talk (11/9/17), Phocuswright Conference

Rachael Rothman, Sr. Analyst, Gaming, Lodging, and Leisure at Susquehanna Financial Group

(on hotel demand)

“I think we know from the industry-wide data that there is a definite shift to book direct…I would also just highlight that going back 30 years from hotel school, we always thought of brands as occupancy insurance and now that we’re getting 92 months into the recovery, I think you’re going to see that brand power come back into effect, and you’ve seen some of the hotel owners that have moved away from branded product and that relied on things like Expedia and Priceline actually suffer and underperform.  And so I think it’s being proven out to the owners and to the operators that the brands actually are working and scale is working.  Occupancies are at all time highs in the hotel industry, my stocks are at all time highs, and there are no signs [of] waning demand…there’s not a ton of pricing power but it isn’t a demand issue.”

(on hotels possibly filling the OTA gap on meta spend)

“They are unlikely to step in to fill that gap.  I think historically that has not necessarily been a customer that they wanted.  I view it as someone who’s pretty brand agnostic and price sensitive.  I think what you could see though is some of the bigger brands stepping in with something like Instagram and Facebook and saying ‘hey Rachel, I see that you’re celebrating your 10-year wedding anniversary at the Ritz Carlton in Naples…how about next year you go to the Ritz Carlton Dana Point and you book today and we give you 20% off’.  They already know I’m a loyal Ritz Carlton customer, they can see that I’m taking photos and interacting…and they can go direct to the customer with a targeted offer.”

(on alternative accommodations)

“First, there is some thought that it takes away pricing power on compression nights.  So, that would be if you had SXSW in Austin, TX for example, historically maybe you could have raised your room rate by 30%, now you can only raise it by 10%.  But, we also have to consider that Airbnb’s supply is flexible, meaning that people put their capacity on when rates are the highest and I personally am of the belief that Hilton and Marriott and their owners’ balance sheets are built for a recession.  When we go into a recession, the Airbnb owners, many of them have extended themselves into having multiple properties and when they find that they can only rent that home for 30 bucks and it’s either $100 or 5 of their own hard labor hours to clean it, you’re going to see a lot of that capacity come off the market and I think it’s going to be the same balance sheet lesson that a lot of individual homeowners learned in the 2008 recession.”

(on loyalty discounts from hotel chains)

“I think it’s working, I think it’s a big deal…Expedia may be able to offer you a free flight or free whatever, what they can’t offer you is 9am check-in, 4pm check-out, free breakfast, unlimited free cocktails, any sort of amenity that any one of these hotel owners or operators can offer to their customers.”

Lloyd Walmsley, Managing Director, UBS

(on OTAs pulling back on meta)

“Priceline has been the most vocal about pulling back and they had spent a lot of money on trivago over the last 2 years and I think trivago was pushing pretty hard and you have a new management team come in and decide to take the strategy a little more aggressively.  They had been funding a competitor in search channel so I think it makes eminent sense to try to reset that auction.  Priceline has spent some money in TV historically but booking.com’s brand in all of our survey work has still lagged that of peers, so I think it makes sense to be building a brand…I think Google is going to continue to move further and further into the travel vertical and that poses obvious risks if you don’t have a strong brand.”

(on TV spending)

“[TV spending] makes the online spend more efficient.  So, Google obviously has a quality score and the higher your click-through rate is the less you pay for ads.  Kayak, when they were public, gave us enough disclosure as part of the IPO process that we could see when they started ramping their TV spend, in the first two years after they ramped their TV spend, the cost that they had to pay for their digitally acquired clicks was cut in half.  I wouldn’t expect the same magnitude for a brand as big as booking.com but there are secondary benefits to being on TV.”

Eric Sheridan, Managing Director, UBS

(on loyalty discounts from hotel chains)

“Phocuswright put up a lot of their own data saying that loyalty rewards don’t actually drive as much velocity of shopping in travel.  We’ve done consumer intention surveys that say similar things.  The business market seems like it’s much more driven by loyalty rewards than the consumer marketplace…I would expect over time that the OTAs and maybe Airbnb explore loyalty and rewards.”

(on Amazon, Facebook, Google getting into travel)

“Amazon’s tried a couple times at this more in beta mode and never really gotten much further than that…I think that the inventory is so fragmented on a global scale that in order to achieve scale, I think it would take quite a long time to achieve the scale benefits that Expedia and Priceline have on the inventory side.  We’ve always been fairly dismissive of the concept that Google will become an OTA.  Google wants more of their partners’ marketing budgets by delivering more qualified leads and delivering more CPCs from it.  So, from our view, Google’s always wanted to own more top-funnel than the actual bottom part of the funnel.”

Joint Interview with Expedia guys (Phocuswright Europe, 5/22/17), Phocuswright Conference

Cyril Ranque, President, Lodging Partner Services, Expedia Inc.

(on providing technology for hotels and being a “platform company”)

“The idea is pretty simple.  We’ve proven that we can take the platform from our brand and power other brands very effectively in the OTA space and now the idea is to take the same platform and leverage it for hotel partners and allow them to access all the benefits of that technology…and the thinking is if we are improving the customer experience regardless of where the customer wants to book, be it on Expedia or hotels.com or on brand.com…that translates into more disposable income spent on travel.  It should be good for the industry and if we power all these parts of the ecosystem, we’ll get a share of the revenue…hotels have needs for pricing, they need data to price correctly.  We provide them access to competitive data, market demand, etc. so they can do their pricing, it can be a chain or individual hotel, we can provide data that a small hotel would not have access to, in order to optimize their pricing.

Then, after they’ve done their pricing, they need to attract consumers to their website…we launched a product that allows them to spend their marketing to attract consumers from Expedia and hotels.com directly to their website, which was unthinkable a few years ago…and then the next step is powering their website to make it more effective and increase their conversion.  We’re doing this with Marriott on vacations and Vacations by Marriott has grown tremendously…then after that you get to the guest experience, we’ve invested in a company called Alice in which we have a minority share which optimizes hotel operations.  We also allow hotels to sign up more loyalty members…we did a test with Red Lion.  Then we provide real time feedback to increase the customer satisfaction by treating problems that arise on property before customers write a review later on.”

Booking.com Executive Interview – Phocuswright India 2017 (3/13/17), Phocuswright Conference

Oliver Hua, Managing Director, APAC, Booking.com

“Three years ago we had less than 3,000 hotel partners in India.  Now, today we have over 20,000.  And the number of room nights per partner remains roughly steady…it’s pretty significant but we’re still in early stages of growth.  We’re seeing high double-digit growth for multiple years now and we expect that to continue.

Our strategy in China is two-fold.  We develop our own business, China is a major source market for us.  The majority of APAC destinations – Japan, Korea, Thailand – are quite dependent on Chinese inbound.  We have nearly a thousand people working in our call center in Shanghai.  Then the second prong of our strategy is the partnership we have with Ctrip that evolved from a commercial partnership where they were a distribution partner for us into an equity partnership in which we invested pretty heavily into the company in 2014 and 2015…Ctrip has been leading industry consolidation in China, they’ve been rolling out new product and services…and the fact that they’re gaining share in the market is helping us as well because through them we just get a larger audience for our inventory.  Our relationship with agoda, our sister company, is very similar to how we work with Ctrip in China.  They are two separate brands that operate completely independently of each other.”

(on the long tail of properties)

“In Japan, it’s a well known fact that we have a situation where the market is essentially undersupplied from a traditional hotel accommodation perspective, and then you have a lot of long-tail properties that’s unoccupied because of the shrinking population.  The demographic change and migration to large cities…you have a lot of apartments, short-term rentals that’s available for rent and that’s a market we’re definitely very much committed to.  We actually have plenty of that type of inventory that’s available on our site for instant booking and immediate confirmation.  That’s how we differentiate from other offerings.  When you come to booking.com and you book a long-tail property, the customer experience is exactly the same as booking a traditional global chain hotel.”

a16z Podcast (10/28/17; B2B2C Business Models — Trick or Treat?)

Martin Casado

“One of the biggest mistakes I see is ‘listen, we’re working with system integrators, we’re working with MSPs [managed service providers] because they somehow think that’s going to give them reach to a bunch of customers, basically it never pans out…in mature markets it sometimes pans out, but in pre-chasm markets, I don’t think it ever does.  [Pre-chasm meaning] there’s no market category, there’s no budget, the customer isn’t educated about what you’re doing.  And the reason it doesn’t work out is because…a lot of the enterprise actually purchases from a reseller, not from the vendor directly…and the thing is [resellers] don’t have the salesforce to carry pre-chasm products.  They’re good at distributing things where there’s a known budget, but if you’re doing something fundamentally new, there’s no way that a VAR can pitch, educate the customer, and so forth, so normally you have to create a pull-based market before you can actually engage partners.

In the enterprise, the two ends are the vendor, which creates the technology, and the customer, which consumes the technology.  In a direct sales model, the vendor creates a sales team and the sales team shows up to the customer and manages that customer.  So, let’s say you create a widget in a mature market, so the customer doesn’t have to be educated.  And you’re able to get a general partner to sell it.  One of the big problems is you don’t have a relationship with the customer.  So much of enterprise dynamics come from renewals, expansions, and upsells.  Often hyperlinearity in growth comes from expansions, so you actually have to have a relationship with the customer in order to do that.  And so going that route is fraught with peril for startups.  You are not the one that is actually bringing the product to the end customer…you don’t know what they want, you don’t know what they need, and you won’t have the leverage point to expand that sale […]

The biggest jump in operational complexity a start-up will ever do is when it goes from one product to two products.  So, if you do introduce a second product, if you can align it with the same constituency, the same buyer, that’s the best…a natural response of start-ups of not having product/market fit is building another product, which I think is probably the worst thing you can do.

Over time, R&D pencils out as almost a fixed cost or super sublinear but sales scales linearly with people on the ground, so if you hire more sales people, they’re very expensive but to get more dollars you need more sales people.  So, there’s this dream that you can have somebody else bear that cost for you…the problem is that the channel doesn’t have the salesforce to push what you’re doing…the lifecycle that normally works is the startup tries that, figures out that it doesn’t work, but maintains relationships with these channel providers.  Start-up then builds a direct salesforce and creates awareness in the market and starts to sell it.  Once you’ve started to sell it, you’ve actually created now a market and the market isn’t a product market but a market of services around your product.  So I sell something to the enterprise just to do a proof-of-product, requires some implementation, and often companies will pay for that, and then after you’ve sold it, that requires professional services along with it.  Once you’ve sold enough gear, those markets arise and now you have something you can incent a channel partner with….so now you actually have a market to incent them to invest in training, to get a relationship with the customer, etc.

It’s incumbent on the startup to create the market and once you’ve created the market, you have sufficient leverage to turn on the channel.”

Alex Rampell

“The other challenge with B2B2C is that if your C at the end is coming from the B in the model, then the two business models endanger one another.  There’s a company called Yodlee, it’s been around for around 20 years, it’s a key part of the ecosystem for every fintech company that gets data from banks.  So if you ever go to E-Trade and it says ‘log into your Bank of America account to wire funds’, that’s going through Yodlee.  It turns out they get all of that information in aggregate and they have a different business model which is they sell anonymized aggregated information that they’re collecting from everybody, all the businesses that work with Yodlee.  If they do that too aggressively, it puts them at odds with their core, main business…you can go from being a symbiote [with your middle B] to a parasite or even an antagonist if you start doing things that are competitive with what they’re doing or hurt their primary business model.”

(when B2B2C works)

“I think Rakuten is a good example of this because if you are a bread merchant and there’s another meat merchant you can work with and you realize the meat merchant sells more meat and the bread merchant sells more bread if there’s one communal shopping mall for them all to work with, they don’t want the consumer to sign up directly, they want the consumer to sign up in that shopping mall.  That’s highly symbiotic…there it’s Rakuten getting the end consumer, but those two intermediate merchants have an incentive for Rakuten to own that consumer because it makes the whole thing work.”


[CSGP – CoStar Group; REIS – Reis Inc.] Tale of Two Data Providers


Before CoStar came onto the scene in 1987, getting clean and current data on rental rates, vacancies, absorption, and precedent comps needed to transact in commercial real estate was a vexing, ad hoc process.  In the early/mid-90s, juniors staffed across various financial institutions devoted significant chunks of their workweeks collecting and scrubbing this information.  Or […]


[TripAdvisor, Trivago, OTAs] Thoughts on the Carnage


Trivago’s “relevance assessment dimension”, implemented in late 2016, is an algorithmic adjustment that compels hotel advertisers to improve their landing sites and booking engines if they want to rank higher in trivago’s search results.  The idea is that while the user experience starts with a room search on trivago, it extends to when she clicks off to actually book the room on the advertiser’s site…so if the advertiser screws up that last step (according to trivago), it will have to pay more for each referral.  One consequence of this change was that trivago penalized OTAs whose links sent users to yet another page of search results on OTA.com rather than directly to the property that the OTA listed on trivago.

While trivago technically has 200+ advertisers competing for placement in its marketplace, two of them, Expedia and Priceline, respectively comprise 36% and 43% of the company’s revenue. [Expedia acquired 63% of trivago from early investors in 2013 and continues to own 60% of the company post its December 2016 IPO].  It’s usually not a good idea to behave like a powerful aggregator towards two dominant customers who actually are powerful aggregators when you are not…but that’s essentially what trivago did, tasking its algorithm to extract the most value from advertisers in zero-sum fashion while providing CRM, bidding, and booking tools for smaller hotels – including “express booking”, where trivago actually hosts the booking site on behalf of the advertiser – to compete more effectively against the OTA giants, with the aim of stoking greater bid density and pushing the agencies, in trivago’s own words, towards “the pain points of their profitability targets.”

In the first several quarters after implementing relevance assessment, trivago saw qualified referrals grow ~60% y/y and revenue per qualified referral (RPQR) grow 4%-4.5%.  The company cautioned that RPQR would be lower (or, euphemistically, “normalized”) in the second half of 2017 since, as advertisers adapted their sites to trivago’s relevance assessment standards, they would not be required to bid as much for traffic.  No big deal.  But then things took a turn for the worse.  On 9/6/17, trivago announced that revenue growth for the full year would be more like 40% instead of 50% and that EBITDA would come in below guidance as well, as the RPQR hit turned out to be worse than expected.
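Referral revenue is simply qualified referrals × RPQR, so the two growth rates compound into revenue growth.  A back-of-envelope sketch using the figures above (my illustrative arithmetic, not company-reported math):

```python
# Decomposition: referral revenue = qualified referrals (QR) x RPQR,
# so y/y revenue growth compounds the growth of each factor.
qr_growth = 0.60      # qualified referrals ~+60% y/y in the early post-change quarters
rpqr_growth = 0.045   # RPQR growth, top of the reported 4%-4.5% range

revenue_growth = (1 + qr_growth) * (1 + rpqr_growth) - 1
print(f"implied referral revenue growth: {revenue_growth:.1%}")  # 67.2%
```

With volume growth decelerating, a falling RPQR means the pricing term flips from a small tailwind to a drag on that product, which is how guidance migrates from ~50% to ~40%.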

The charitable interpretation of this bleak outcome, the line that management continuously parrots to investors, is that by optimizing the user experience, trivago is nobly sacrificing near-term profits for the sake of long-term gain.  Management understands that having loyal users is the key to spinning up a platform that gives you license to marginalize suppliers (advertisers, in this case), and so trivago is splurging on TV advertising [over 90% of the company’s revenue is dedicated to sales and marketing], assiduously monitoring the results, and iteratively tweaking campaigns towards the aim of building brand value.  At the same time, by adjusting its bidding algorithm and forcing suppliers to play ball, it is ensuring that users have the most seamless search and booking experiences possible.

But it’s not clear to me why Trivago feels uniquely positioned to accomplish the task of creating memorable ads or whatever it is that they think drives persistent site visits.  Because unlike, say, a SaaS model, where the journey from site visits to free trials to paid subscriptions sucks the user into ever deeper states of captivity that can, in theory, generate sticky, layered recurring revenue streams, what is the lock-in mechanism here?  At least TripAdvisor can claim authentic and current user-generated reviews.  Google began with a superior mousetrap and didn’t need to spend gobs on advertising to attract users (plus, because general search is so frequently used, it is habit-forming in a way that travel-specific search is not).  Trivago’s vertical search has, well…what exactly…to keep users continuously coming back once they have clicked off the site?  And furthermore, what can’t be replicated?  Expedia offers its own version of relevance assessment, its Accelerator program encouraging hotel properties to graduate up the Expedia listings page by paying extra commissions or by improving quality scores.

Growth in qualified referrals and referral revenue has decelerated in dramatic fashion.  No bueno:

[Definition of qualified referrals from the F-1: “We define a qualified referral as a unique visitor per day that generates at least one referral. For example, if a single visitor clicks on multiple hotel offers in our search results in a given day, they count as multiple referrals, but as only one qualified referral. While we charge advertisers for every referral, we believe that the qualified referral metric is a helpful proxy for the number of unique visitors to our site with booking intent, which is the type of visitor our advertisers are interested in and which we believe supports bidding levels in our marketplace.”]
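To make the referral vs. qualified referral distinction concrete, here is a toy computation over hypothetical click data (the function and field names are mine for illustration, not trivago’s actual pipeline):

```python
def count_referrals(clicks):
    """clicks: list of (visitor_id, date) pairs, one per hotel-offer click.

    Every click is a billable referral; a qualified referral is a unique
    visitor-day that generated at least one referral.
    """
    referrals = len(clicks)
    qualified = len({(visitor, day) for visitor, day in clicks})
    return referrals, qualified

# One visitor clicks three offers in a day, another clicks once:
clicks = [("v1", "2017-09-01"), ("v1", "2017-09-01"),
          ("v1", "2017-09-01"), ("v2", "2017-09-01")]
print(count_referrals(clicks))  # (4, 2): 4 billable referrals, 2 qualified
```

So RPQR (referral revenue / qualified referrals) blends both pricing per click and clicks per visitor-day.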

And with that, the potency of trivago’s brand advertising also appears to have waned, as the company experienced significant y/y de-leverage on sales and marketing in the latest quarter and declining returns on ad spend over the last 2 quarters:

ROAS weakness also happens to coincide with TripAdvisor’s renewed commitment to brand advertising this year, so on top of volume weakness, perhaps TRVG is also witnessing pricing pressure on ad units? [After spending $51mn on TV advertising in 2015, TripAdvisor reallocated marketing dollars to online search and spent nothing at all on TV in 2016.  They’re committing $70mn-$80mn this year as part of a multi-year brand ad campaign]. 

If online travel were fragmented up and down the value chain, then being the first to spend aggressively on brand advertising for the sake of creating a liquid marketplace that then itself becomes the value proposition might just work.  The numbers are tempting.  Global online hotel bookings of ~$145bn comprise around 1/3 of total offline + online hotel bookings and are taking share from the offline channel.  At a 15% take rate, that’s a $22bn addressable market growing low double-digits annually.  On its current revenue base of $1bn, claiming even a small share of that could drastically move the dial.  But the question, of course, is: can you grab share at compelling economics?  I don’t understand the fundamental value proposition offered by trivago that cannot be offered equally well by many other top-of-funnel peers, or even further down-funnel for that matter.
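The market math above works out as follows (a sketch of the arithmetic, not a forecast):

```python
online_hotel_bookings = 145e9  # ~$145bn global online hotel bookings (~1/3 of total)
take_rate = 0.15               # assumed industry take rate

tam = online_hotel_bookings * take_rate
print(f"addressable revenue pool: ${tam / 1e9:.2f}bn")  # ~$21.75bn, i.e. the ~$22bn TAM

# Against trivago's ~$1bn revenue base, each point of share is material:
print(f"1% of the pool: ${0.01 * tam / 1e6:.0f}mn")
```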

This is why I find trivago’s competitive positioning so precarious: it possesses neither the bargaining power to procure traffic at an advantaged cost nor an irreplicable process to transform that traffic into value so compelling and unique that even its powerful customers will cede economic ground.  Online travel is increasingly dominated by aggregators further downstream who have myriad acquisition channels – including Facebook, Google, and direct brand advertising – through which to lure travelers.  And as in any highly competitive market, attempting to generate sustainable value off brand advertising is an unwinnable game unless there is a differentiating resource at the core.

At the Citi Tech Conference last month, when asked about competitors recently copying trivago’s strategy, the company could offer only the following effete non-statement:

“I think the only sustainable competitive advantage that you can have is to continue to be ahead of your competition. And so, the competitive response is to continue to innovate in marketing and in product and make sure that there is always a gap between yourself and competitors that are copying what has worked very well for you. I think that sounds generic, but I think that’s the only thing you can do.”

TRVG’s management maintains that it can sustain 25% EBITDA margins at some point (better than Expedia’s high-teens EBITDA margins).  I doubt it.

TripAdvisor is the Twitter of online travel: a unique, hard-to-replicate asset that eludes monetization but has significant strategic value.  There’s clearly a double marginalization problem to be solved via vertical acquisition, which TRIP Chairman Greg Maffei seems open to.  And that might really be the primary reason to hold on to the stock.  Well, that, plus the non-hotel side of the business (attractions + restaurants) is killing it, growing revenue by 25%-30% over the last year and solidly profitable.  That business is probably worth ~$1.5bn (4.5x revenue), leaving $2.4bn in enterprise value for a hotel business, one facing revenue and cost pressures, doing around $200mn in EBITDA (after stock comp).  By comparison, trivago’s enterprise value is $1.8bn, and they’re doing only $13mn in EBITDA.  The value disparity makes little sense.

[Re: “hard-to-replicate”, as I previously wrote:

“Over 360mn people visit the company’s site every month to plan their trips because they trust its deep fount of nearly 500mn authentic and current user-generated reviews and 90mn photos on 7mn hotels, attractions, and restaurants.  Those travelers, upon completing their trips, post their own reviews, contributing to a burgeoning body of shared knowledge that drives traffic through better search engine rankings and compels still more potential travelers to visit Tripadvisor at the start of their research process.  The company further stokes participation by offering badges and other marks of distinction to particularly helpful and active reviewers.  Hoteliers, well aware of Tripadvisor’s critical top-of-funnel role, make a special effort to respond to consumer reviews.  If you’ve stayed at a boutique hotel, you will have no doubt been encouraged at some point to leave a review on Tripadvisor by the hotel manager, who often proudly plasters the property’s Tripadvisor rating on the front window as a point of differentiation.  It would be monstrously difficult to recreate the breadth and depth of TRIP’s reviews.”]

[“Monstrously difficult”?  A bit hyperbolic on my part.  In theory, I guess I don’t really see why Priceline, which already has over 135mn hotel reviews, couldn’t expand its share as it garners more direct traffic through brand advertising]

In prior quarters, the y/y decline in TRIP’s revenue per hotel shopper was largely attributed to a mix shift from desktop to mobile, a concern alleviated by the hope that mobile monetization improvement would eventually overcome such dilution.  But now, bid-downs by Priceline, which is shifting ad dollars to brand advertising after years of diminishing ROI on performance marketing, have whacked monetization on the desktop side and confounded several quarters of positively inflecting trends.

After a two-year hiatus, TripAdvisor also recently began splurging on TV advertising…so, on top of getting hosed by its largest customer on the revenue side, TripAdvisor is now competing with Priceline for TV ad spots as both pursue a common goal of driving more direct traffic to their own sites.  It’s hard not to be cynical about TripAdvisor’s standalone role in the value chain.

So, with trivago implicitly raising bid prices and both trivago and TripAdvisor trying (and, in the latter case, failing) to encroach directly upon bookings, it appears that Priceline is finally saying “nuh-uh” and using bid downs as part of a bargaining tactic to keep suppliers in check.  Whether the shift from performance to branded advertising is structural seems inconclusive.  Recent comments from Priceline CEO Glenn Fogel:

“I think one of the things very important to recognize is the dynamic nature of how the performance marketing works. So while we can make change in terms of how much money we want to spend and we where we want to spend it, our partners are also making changes all the time, and other people and auctions are making changes. So, this is dynamic and interactive, so it’s difficult to project long term what’s going to happen.”

Still, Priceline has been talking about pressure on performance ad returns for some time and even as Expedia professes loyalty towards trivago as an acquisition channel, it admits that meta search generates lower “repeat propensity” than search engine marketing.  In any case, what seems abundantly clear is that TripAdvisor and trivago, who derive 46% and 79% of revenue, respectively, from Priceline and Expedia, are really in no place to dictate terms.  Generating extra-normal profits as standalone entities, like the kind implied by the obligatory “small x% of big $TAM” exhibit that these guys all like to use, requires TRIP and TRVG either claiming a fair share of extraordinary surplus or an unfair share of modest surplus.  The absence of a uniquely compelling value proposition impedes the former; industry structure constrains the latter.

Implicit in my TRVG/TRIP bashing, however, is that value in this industry accrues a level below, and in that spirit, Expedia could be interesting.  EXPE sold off last week as the company noted that its cost structure would be larded with investments related to accelerated hotel on-boarding [3 years ago, EXPE was adding 25k-30k hotels / year, this year it’ll be 80k, and will “step change” in future years], cloud computing [a 2-3 year transition.  $100mn this year, much greater than management’s guidance a year ago, growing by over 50% next year], and marketing [as management turns its attention to deepening local marketplace liquidity after years of broad-based acquisition].

Expedia isn’t the cleanest company with the strongest moat – the core OTA is dependent on Google for traffic and faces competition from a consolidating supplier base, HomeAway is up against AirBnB, tech stack integration across a slew of acquisitions appears to have been sloppy – but as the second largest OTA by bookable properties next to Priceline, the company has certainly crossed the threshold of critical scale and fostered a sustainably profitable two-sided marketplace.  Disintermediation concerns stemming from an increasingly consolidated supplier base and worries about Google/Facebook aggressively moving into the space, have plagued OTAs for years…but Priceline and Expedia have done just fine as continuous investment in technology, marketing, hotel relationships, and vigorous A/B testing have congealed into a hard-to-replicate value proposition for suppliers looking to offload inherently perishable inventory and travel shoppers looking to dependably source the broadest, most relevant selection at the lowest price, with increasing participation on each side of the platform begetting buy-in from the other.


HomeAway, acquired for what seemed like a pricey $3.9bn in December 2015, has been growing rapidly (+40%-50% y/y) and profitably in the face of competition from AirBnB and Priceline, and now seems like a pretty smart buy.  And Expedia is still in the process of making all ~100k vacation rentals available exclusively online (some bookings are currently arranged offline between guest and host) [3], and has not yet really begun to pursue international markets or fuse Homeaway listings with inventory from its core OTA sites in cities.

When I strip out trivago and stock comp (see below), it looks like Expedia is trading for around 11x EBITDA and 17x FCFE, which seems reasonable to me even if we grant that EBITDA growth will slow to the bottom end of the +10%-20% range (or even somewhat below) for the next few years on accelerated spending…and it looks quite cheap if we think that by weaning itself off acquisitions, dedicating itself to organically deepening engagement, broadening the platform through aggressive on-boarding, and boosting overall productivity by partly shifting its tech infrastructure to the cloud, Expedia can drive accelerated bookings growth and margin expansion 3 years out.  At the very least, I think we can be far more confident that Expedia’s investments offer a reasonable return than that trivago’s continuous spending on TV commercials will ignite sustainable platform activity.

($ millions except per share data)

EXPE TEV ex. TRVG cash                    19,159
TRVG stock price                         $  7.17
x # TRVG shares owned by EXPE                209
= value of TRVG stake                      1,499
Adj. EXPE TEV                             17,661
EBITDA ex. TRVG                            1,589
multiple                                   11.1x
FCFE ex. TRVG                              1,249
Stock comp ex. TRVG                          135
EXPE FCFE ex. TRVG ex. stock comp          1,114
/share                                   $  7.06
multiple                                   17.4x
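For the arithmetic-inclined, the table’s multiples can be reproduced with a quick sketch (all figures from the table above, in $mn except per-share items; the implied EXPE share price at the end is my back-out from the 17.4x multiple, not a quoted figure):

```python
# Strip trivago's stake out of Expedia's enterprise value and recompute
# the multiples shown in the table above ($mn except per-share figures).

expe_tev = 19_159            # EXPE TEV including the trivago stake
trvg_price = 7.17            # TRVG share price
trvg_shares_owned = 209      # TRVG shares owned by EXPE (mn)
trvg_stake = trvg_price * trvg_shares_owned        # ~1,499

adj_tev = expe_tev - trvg_stake                    # ~17,661
ebitda_ex_trvg = 1_589
ev_ebitda = adj_tev / ebitda_ex_trvg               # ~11.1x

fcfe_ex_trvg = 1_249
stock_comp_ex_trvg = 135
fcfe_ex_comp = fcfe_ex_trvg - stock_comp_ex_trvg   # 1,114
fcfe_per_share = 7.06
implied_price = fcfe_per_share * 17.4              # ~$123/share, backed out of 17.4x
```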

You can also own Expedia through Liberty Expedia (LEXEA), which owns 15.5% of Expedia’s common stock representing a 51.9% voting interest in Expedia…but, I don’t think there’s a compelling “arb” here.  LEXEA split off from Liberty Ventures a year ago for the purpose of Expedia eventually purchasing LEXEA’s EXPE shares.  Liberty Expedia also owns an internet retailer of health and dietary supplements called Vitalize (formerly known as Bodybuilding.com), which, based on declining revenue and profits, isn’t doing so hot, and has deteriorated to such an extent that it is small enough to be unceremoniously lumped into “corporate and other”.  It does around $316mn in trailing revenue with negligible OIBDA.

You are getting 0.41 shares of EXPE for every 1 share of LEXEA that you own.  LEXEA also has around $5.40 in net debt / share.  So the NAV breaks down like this…

NAV ($/share)
Expedia        $ 50.43
Net debt       $ (5.36)
Vitalize           ???
Total          $ 45.07

…vs. LEXEA’s current share price of $46.  The delta between NAV and the LEXEA share price values Vitalize at around 0.2x trailing revenue.  Seems fair.  Whatever.
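Spelled out per LEXEA share (note the EXPE price here is an assumption backed out of the $50.43 figure above, i.e. $50.43 / 0.41 ≈ $123; it is not a quoted price):

```python
# LEXEA NAV per share from the figures above; the residual between the
# market price and NAV-ex-Vitalize is what the market imputes to Vitalize.

expe_ratio = 0.41                        # EXPE shares per LEXEA share
expe_price = 50.43 / expe_ratio          # ~$123/share (backed out, not quoted)
expe_value = expe_ratio * expe_price     # $50.43 of EXPE per LEXEA share
net_debt = -5.36                         # net debt per LEXEA share
nav_ex_vitalize = expe_value + net_debt  # $45.07
lexea_price = 46.00
vitalize_residual = lexea_price - nav_ex_vitalize  # ~$0.93/share imputed to Vitalize
```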

Priceline’s stock also sold off post-earnings on decelerating bookings (from ~mid-20s y/y ex-fx growth over the last 4 quarters to 16% in the latest quarter).  While size constraints may translate into slower growth relative to the past, there’s plenty of runway ahead.  Its largest online property, Booking.com, has an insurmountable moat in a fragmented European market [in Europe, independent lodging comprises 67% of total rooms vs. 30% in the US], where I estimate it claims around 40% of European online accommodation bookings, or about 20% of total European bookings.  Globally, Priceline’s ~$80bn of total gross bookings is just 20% of online hotel bookings, or about 6%-7% of total online + offline.  Room nights grew 19% and bookable properties grew 41% (including vacation rentals, +58%) during the most recent quarter, and the meta properties, Kayak and (more recently) Momondo, are growing and profitable.  OpenTable, on which the company took a huge impairment charge last year, has sucked, but I think we’re past that.  I don’t see any meaningful impediments to Priceline continuing to grow its cash earnings per share by mid-teens+ for the foreseeable future.

So yea, setting aside the takeout aspect for TripAdvisor and just evaluating these companies on their standalone long-term value creation potential, I would rather own Priceline (17x EBITDA, backing out long-term investments including Ctrip) or Expedia (11x) than either Trivago (NM) or TripAdvisor (17x).

[ODFL – Old Dominion Freight Line] Superb Logistics Company

Posted By scuttleblurb On In [ODFL] Old Dominion | Comments Disabled

I don’t think there’s anything to do here given the lofty valuation (30x on what feel like peak earnings), but I like this company and thought it was worth a quick shout out.  Old Dominion is an incredibly well-run business staffed with high-caliber folks who care deeply about their work.  The first thing to understand […]

To access this post, you must purchase Annual subscription [1] or Quarterly subscription [2].

[ADI – Analog Devices] A Silent Hero

Posted By scuttleblurb On In [ADI] Analog Devices | Comments Disabled

Almost every electronic device you use on a daily basis is infested with analog chips that monitor, amplify, and transform real world phenomena (weight, light, temperature, amplitude, power), into digital signals that electrical systems can understand.  These chips are the silent and unappreciated heroes that enable safety and infotainment features in cars, wireless infrastructure equipment, […]

To access this post, you must purchase Annual subscription [1] or Quarterly subscription [2].

[WIX – Wix.com] Scaling Profitably

Posted By scuttleblurb On In [WIX] Wix.com | Comments Disabled

Sometime in late 2015, I built my first website.  Or rather, I purchased a subscription on weebly.com, a website builder, and proceeded to upload photos and drag and drop pre-configured modules into a blank screen.  The process was easy, which helped as I was clueless; the layout was unoriginal and austere, which sufficed as the […]

To access this post, you must purchase Annual subscription [1] or Quarterly subscription [2].

[EQIX – Equinix; INXN – Interxion] Network Effects in a Box

Posted By scuttleblurb On In SAMPLE POSTS,[EQIX] Equinix,[INXN] Interxion Holdings | Comments Disabled

The “internet”, as the name implies, is a network of networks.  Scuttleblurb.com is sitting on a server somewhere connected to an IP network different from the one your device is connected to, and the fact that you are reading this means those two networks are communicating.  Likewise, if your internet service provider is Charter and you’d like to send an email to a friend whose ISP is Comcast, the networks of Charter and Comcast need a way to trade data traffic (“peer” with one another).  In the nascent days of the internet, different networks did this at Network Access Points established and operated by non-profits and the government.  Over time, large telecom carriers, who owned the core networks, took control of coordinating peering activity, and small carriers that wished to exchange traffic with them were forced to house switching equipment on their premises.

Eventually, most peering agreements moved to “carrier-neutral” Internet Exchange Points (“IXs” or “IXPs”, data centers like those owned and/or operated by Equinix and Interxion) that were independent of any single carrier.  Today, global carrier neutral colocation/interconnection revenue of $15bn exceeds that of bandwidth provider colo by a factor of two as major telcos have seen their exchange businesses wither.  At first, service providers landed hooks at these neutral exchange points…and then came the content providers, financial institutions, and enterprises, in that order.  Customers at these neutral exchange points can connect to a single port and access hundreds of carriers and ISPs within a single data center or cluster of data centers.  Alternatively, a B2B enterprise that wants to sync to its partners and customers without enduring the congestion of public peering [on a “shared fabric” or “peering fabric”, where multiple parties interconnect their networks at a single port] can establish private “cross-connects”, or cables that directly tether its equipment to that of its customers within the same DC [in an intracampus cross connect, the DC operator connects multiple datacenters with fiber optic cables, giving customers access to customers located in other DC buildings].  

[To get a sense of how consequential network peering is to experiencing the web as we know it today, here’s an account of a de-peering incident, as told by Andrew Blum in his book Tubes: A Journey to the Center of the Internet:

“In one famous de-peering episode in 2008, Sprint stopped peering with Cogent for three days.  As a result, 3.3% of global Internet addresses ‘partitioned’, meaning they were cut off from the rest of the Internet…Any network that was ‘single-homed’ behind Sprint or Cogent – meaning they relied on the network exclusively to get to the rest of the Internet – was unable to reach any network that was ‘single-homed’ behind the other.  Among the better-known ‘captives’ behind Sprint were the US Department of Justice, the Commonwealth of Massachusetts, and Northrop Grumman; behind Cogent were NASA, ING Canada, and the New York court system.  Emails between the two camps couldn’t be delivered.  Their websites appeared to be unavailable, the connection unable to be established.”]

The benefits of colocation seem obvious enough.  Even the $2mn+ of capital you’re laying out upfront for a small 10k sf private build [4] is a pittance compared to the recurring expenses – taxes, staff, maintenance, and especially power – of operating it.  You won’t get the latency benefits of cross-connecting with customers, you’ll pay costly networking tolls to local transit providers, and you’re probably not even going to be using all that built capacity most of the time anyhow.

There’s this theory in urban economics called “economies of agglomeration”, which posits that firms in related industries achieve scale economies by clustering together in a confined region, as the concentration of related companies attracts deep, specialized pools of labor and suppliers that can be accessed more cost effectively when they are together in one place, and results in technology and knowledge spillovers.  For instance, the dense concentration of asset managers in Manhattan attracts newly minted MBAs looking for jobs, service providers scouring for clients, and management teams pitching debt and equity offerings.  Analysts at these shops can easily and informally get together and share ideas.  This set of knowledge and resources, in turn, compels asset managers to set up shop in Manhattan, reinforcing the feedback loop.

I think you see where I’m going with this.  The day-to-day interactions that a business used to have with its suppliers, partners, and customers in physical space – trading securities, coordinating product development, placing an order, paying the bills – have been increasingly mapped onto a virtual landscape over the last several decades.  Datacenters are the new cities.  Equinix’s critical competitive advantage, what separates it from being a commodity lessor of power and space, resides in the network effects spawned by connectivity among a dense and diverse tenant base within its 180+ data centers.  You might also cite the time and cost of permitting and constructing a datacenter as an entry barrier, and this might be a more valid one in Europe than in the US, but I think it’s largely beside the point.  The real moat comes from convincing carriers to plug into your datacenter and spinning up an ecosystem of connecting networks on top.

The roots of this moat extend all the way back to the late ’90s, when major telecom carriers embedded their network backbones into datacenters owned by Interxion, Telx [acquired by Digital Realty in October 2015], and Equinix, creating the conditions for network effects to blossom over the ensuing decade+: a customer will choose the interconnection exchange on which it can peer with many other relevant customers, partners, and service providers; carriers and service providers, in virtuous fashion, will connect to the exchange that supports a critical mass of content providers and enterprises.  Furthermore, each incremental datacenter that Equinix or Interxion builds is both strengthened by and reinforces the existing web of connected participants in current datacenters on campus, creating what are known as “communities of interest” among related companies, like this (from Interxion):

[The bottom layer, the connectivity providers, used to comprise ~80% of INXN’s revenue in the early 2000s]

So, for instance, and I’m just making this up, inside an Interxion datacenter, Netflix can manage part of its content library and track user engagement by cross-connecting with AWS, and distribute that content with a high degree of reliability across Europe by syncing with any number of connectivity providers in the bottom layer.  In major European financial centers, where Interxion’s datacenter campuses host financial services constituents, a broker who requires no/low latency trade execution and data feeds can, at little or no cost, cross-connect with trading venues and providers of market data who are located on the same Interxion campuses.  Or consider all the parties involved in an electronic payments transaction, from processors to banks to application providers to wireless carriers, who must all trade traffic in real time.  These ecosystems of mutually reinforcing entities have been fostered over nearly 20 years and are difficult to replicate.  Customers rely on these datacenters for mission critical network access, making them very sticky, as is evidenced by Equinix’s MRR churn rate of ~2%-2.5%.

[Here’s a cool visual from Equinix’s Analyst Day that shows how dramatically its Chicago metro’s cross-connects have proliferated over the last 6 years, testifying to the network effects at play.  Note how thick the fibers on the “Network” cell wall are.  Connectivity providers are the key.]

The carrier-rich retail colocation datacenters that I refer to in this post differ from their more commodified “wholesale” cousins in that the latter cater to large enterprises that lease entire facilities, design and construct their architectures, and employ their own technical support staff.  Retail datacenters with internet exchanges, meanwhile, are occupied by smaller customers who lease by the cabinet, share pre-configured space with other customers, and rely on the DC’s staff for tech support.  But most critically, because wholesale customers primarily use DCs for space and power rather than connectivity, they do not benefit from the same network effects that underpin the connectivity-rich colo moat.  It is the aggregation function that gives rise to a fragmented customer base of enterprises, cloud and internet service providers, and system integrators (Equinix’s largest customer accounts for less than 3% of monthly recurring revenue) and allows the IX colo to persistently implement price hikes that at least keep up with inflation.

This is not the case for a wholesale DC provider, which relies on a few large enterprises that wield negotiating leverage over it.  DuPont Fabros’ largest customer is over 25% of revenue; its second largest accounts for another 20%.  A simple way to see the value differentiation between commodity wholesale and carrier-rich retail data center operators is to compare their returns on gross PP&E over time.

EBITDA / BoP Gross PP&E, avg. 2011-2016

Wholesale
  DuPont Fabros      9.0%
  Digital Realty    10.1%
IX Retail
  Equinix           16.7%
  Interxion         15.1%

[There are also retail colos without internet exchange points that deliver more value for their customers than wholesale DCs but less compared to their IX retail brethren, and some DCs operate a hybrid wholesale/retail model as well.  It’s a spectrum]  

So you can see why wholesale DCs have been trying, with little success, to break into the IX business organically for years.  Digital Realty, which today gets ~15% of its revenue from interconnection, bought its way into the space through its acquisition of Telx in October 2015, followed up by its acquisition of a portfolio of DCs from Equinix in July 2016.  The secular demand drivers are many…I’m talking about all the trends that have been tirelessly discussed for the last several years: enterprise cloud computing, edge computing, mobile data, internet of things, e-commerce, streaming video content, big data.  These phenomena are only moving in one direction.  We all know this and agree.

But it’s not just about the amount of data that is generated and consumed, but also about how.  The ever hastening pace and competitiveness of business demand that companies have access to applications on whatever device wherever they happen to be; the data generated from their consumption patterns and from the burgeoning thicket of IoT devices need to, in turn, be shot back to data center nodes for analysis and insight.  And the transfer of data to and from end users and devices to the datacenters needs to happen quickly and cheaply.  Today, the typical network topology looks something like this…

…a hub-and-spoke model where an application traverses great lengths from a core datacenter located somewhere in the sticks to reach end users and data is then backhauled from the end user back to the core.  This is expensive, bandwidth-taxing, slow, and because it is pushed over the public internet, sometimes in technical violation of strict security and privacy protocols.  You can imagine how much data a sensor-laden self-driving car generates every minute and how unacceptably long it would take and how expensive it would be to continuously transfer it all back to the core over a 4G network.  Instead, the IT network should be reconfigured to instead look like this…

…a widely-distributed footprint of nodes close to end user/device, each node hosting a rich ecosystem of networks and cloud partners that other networks and cloud partners care about, pushing and pulling bits to and from users more securely and with far less latency vs. the hub/spoke configuration.  Microsoft clearly shares the same vision.  Satya Nadella on MSFT’s fiscal 4q conference call:

“So to me that’s what we are building to. It’s actually a big architectural shift from thinking purely of this as a migration to some public cloud to really thinking of this as a real future distributed computing infrastructure and applications…from a forward looking perspective I want us to be very, very clear that we anticipate the edge to be actually one of the more exciting parts of what’s happening with our infrastructure.”

Something to consider is that while distributed computing appears to offer tailwind for the IX colos, it can have existential whiffs when pushed to the extreme.  Is it really the long-term secular trend that EQIX management unequivocally proclaims it to be?  Or is it just a processing pit stop for workloads that are inexorably inching their way further to the edge, to be directly manipulated by increasingly intelligent devices?

Consider that this past summer, Microsoft released Azure IoT Edge, a Windows/Linux solution that enables in-device AI and analytics using the same code running in the cloud.  To draw an example from Microsoft’s Build Developer conference, Sandvik Coromant, a Swedish manufacturer of cutting tools, already has machines on its factory floors that send telemetry to the Azure cloud, where machine learning is applied to the data to predict maintenance needs and trigger preemptive machine shutdowns when certain parameters are tripped.  But with Azure IoT Edge, that logic, and a whole menu of others that used to reside solely in the cloud, can now be ported directly to the machines themselves.  The process loop – sending telemetry from the device to the cloud, analyzing it, and shooting it back down to the device – is obviated, cutting the time to decommission a faulty machine from 2 seconds down to ~100 milliseconds.  While this seems like the cloud node is rendered inert, note that the algorithms are still developed and tested in the data center before being exported to and executed on the device…and even as in-device local AI becomes more sophisticated, the data deluge from burgeoning end nodes will still need to be synced to a centralized processing repository to more intensively train machine learning algorithms and generate predictive insights that are more expansive than can be derived locally.

But there is also the fear that as enterprises consider moving workloads off-premise, they bypass hybrid [public + private colocated or on-premise cloud services] and host mostly or entirely with a public hyperscale vendor (AWS, Azure, Google Cloud) [a colocated enterprise brings and maintains its own equipment to the datacenter, whereas a public cloud customer uses the equipment of the cloud provider] or that current hybrid enterprises migrate more and more workloads to the public cloud…or that public cloud vendors build out their own network nodes to host hybrid enterprises.  But by all accounts, Equinix is in deep, mutually beneficial partnership with Cloud & IT services customers (AWS, Google, Azure, Box, SaaS companies), who have been the most significant contributors to Equinix’s monthly recurring revenue (MRR) growth over the last several years.  The hyperscalers are relying on connectivity-rich colos like Equinix and Interxion to serve as their network nodes to meet latency needs on the edge.

There are 50 or so undersea cable initiatives in the world today that are being constructed to meet the proliferating amount of cross-border internet traffic, which has grown by 45x over the last decade.  These subsea projects are being funded not by telecom networks as in days past, but by the major public cloud vendors and Facebook, who are landing many of those cables directly on third party interconnection-rich colos that host their web services.

[Map of subsea cable routes.  Source: TeleGeography]

Cloud & IT customers comprise half of Equinix’s top 10 customers by monthly recurring revenue (MRR) and operate across all three of the company’s regions (Americas, EMEA, and APAC) in, on average, 40 of its datacenters [compared to 4 of the top 10 operating in fewer than 30 datacenters, on average, just a year ago].  The number of customers and deployments on Equinix’s Performance Hub, where enterprises can cross-connect to the public clouds and operate their private cloud in hybrid fashion, has grown by 2x-3x since 1q15, while 50%+ growth in cross-connects to cloud services has underpinned 20% and 14% recurring revenue CAGRs for Enterprise and Cloud customers, respectively, over the last 3 years.

Still another possible risk factor was trumpeted with great fanfare during CNBC’s Delivering Alpha conference last month by Social Capital’s Chamath Palihapitiya, who claimed that Google was developing a chip that could run half of its computing on 10% of the silicon, leading him to conclude that: “We can literally take a rack of servers that can basically replace seven or eight data centers and park it, drive it in an RV and park it beside a data center. Plug it into some air conditioning and power and it will take those data centers out of business.” 

While this sounds like your standard casually provocative and contrived sound-bite from yet another SV thought leader, it was taken seriously enough to spark a sell-off in data center stocks and put the management teams of those companies on defense, with Digital Realty’s head of IR remarking to Data Center Knowledge:

“Andy Power and I are in New York, meeting with our largest institutional investors, and this topic has come up as basically the first question every single meeting.”  

To state the obvious, when evaluating an existential claim that is predicated upon extrapolating a current trend, it’s often worth asking whether there is evidence of said trend’s impact today.  For instance, the assertion that “intensifying e-commerce adoption will drive huge swaths of malls into extinction”, while bold, is at least hinted at by moribund foot traffic at malls and negative comps at mall-based specialty retailers over the last several years.  Similarly, if it is indeed true that greater chip processing efficiency will dramatically reduce data center tenancy, it seems we should already be seeing this in the data, as Moore’s law has reliably held since it was first articulated in 1965, and server chips are far denser and more powerful today than they were 5-10 years ago.  And yet, we see just the opposite.

Facebook, Microsoft, Alphabet, and Amazon are all accelerating their investments in datacenters in the coming years – opening new ones, expanding existing ones – and entering into long-term lease agreements with both wholesale and connectivity colo datacenter operators.  Even as colocation operators have poured substantial sums into growth capex, utilization rates have trekked higher.  Unit sales of Intel’s datacenter chips have increased by high-single digits per year over the last several years, suggesting that the neural networking chips that CP referred to are working alongside CPU servers, not replacing them.

It seems a core assumption of CP’s argument is that the amount of data generated and consumed is invariant to efficiency gains in computing.  But cases to the contrary – where efficiency gains, in reducing the cost of consumption, have actually spurred more consumption and nullified the energy savings – are prevalent enough in the history of technological progress that they go by a name, “Jevons paradox”, described in this New Yorker article from December 2010 [5]:

In a paper published in 1998, the Yale economist William D. Nordhaus estimated the cost of lighting throughout human history.  An ancient Babylonian, he calculated, needed to work more than forty-one hours to acquire enough lamp oil to provide a thousand lumen-hours of light—the equivalent of a seventy-five-watt incandescent bulb burning for about an hour. Thirty-five hundred years later, a contemporary of Thomas Jefferson’s could buy the same amount of illumination, in the form of tallow candles, by working for about five hours and twenty minutes. By 1992, an average American, with access to compact fluorescents, could do the same in less than half a second. Increasing the energy efficiency of illumination is nothing new; improved lighting has been “a lunch you’re paid to eat” ever since humans upgraded from cave fires (fifty-eight hours of labor for our early Stone Age ancestors). Yet our efficiency gains haven’t reduced the energy we expend on illumination or shrunk our energy consumption over all. On the contrary, we now generate light so extravagantly that darkness itself is spoken of as an endangered natural resource.

Modern air-conditioners, like modern refrigerators, are vastly more energy efficient than their mid-twentieth-century predecessors—in both cases, partly because of tighter standards established by the Department of Energy. But that efficiency has driven down their cost of operation, and manufacturing efficiencies and market growth have driven down the cost of production, to such an extent that the ownership percentage of 1960 has now flipped: by 2005, according to the Energy Information Administration, eighty-four per cent of all U.S. homes had air-conditioning, and most of it was central. Stan Cox, who is the author of the recent book “Losing Our Cool,” told me that, between 1993 and 2005, “the energy efficiency of residential air-conditioning equipment improved twenty-eight per cent, but energy consumption for A.C. by the average air-conditioned household rose thirty-seven per cent.”

And the “paradox” certainly seems apparent in the case of server capacity and processing speed, where advances have continuously accommodated ever growing use cases that have sparked growth in overall power consumption.  It’s true that GPUs are far more energy efficient to run than CPUs on a per instruction basis, but these chips are enabling far more incremental workloads than were possible before, not simply usurping a fixed quantum of work that was previously being handled by CPUs.

With all this talk around chip speed, it’s easy to forget that the core value proposition offered by connectivity-rich colos like EQIX and INXN is not processing power but rather seamless connectivity to a variety of relevant networks, service providers, customers, and partners in a securely monitored facility with unimpeachable reliability.  When you walk into an Equinix datacenter, you don’t see endless rooms of servers training machine learning algorithms and hosting streaming sites, but rather cabinets housing huge pieces of switching equipment syncing different networks, and overhead cable trays secured to the ceiling, shielding thousands of different cross-connects.

The importance of connectivity means that the number of connectivity-rich datacenters will trend towards but never converge to a number that optimizes for scale economies alone.  A distributed topology with multiple datacenters per region, as discussed in this post and outlined in this article [6], addresses several problems, including the huge left-tail consequences of a single point of failure, the exorbitant cost of interconnect in regions with inefficient last-mile networks, latency, and jurisdictional mandates, especially in Europe, that require local data to remain within geographic borders.  Faster chips do not solve any of these problems.

Incremental returns

An IX data center operator leases property for 10+ years and enters into 3-5 year contracts, embedded with 2%-5% price escalators, with customers who pay monthly fees for rent, power, and interconnection that together comprise ~95% of total revenue.  A typical new build can get to be ~80% utilized within 2-5 years and cash flow breakeven inside of 12 months.  During the first two years or so after a datacenter opens, the vast majority of recurring revenue comes from rent.  But as the datacenter fills up with customers, and those customers drag more and more of their workloads to the colo and connect with other customers within the same datacenter and across datacenters on the same campus, power and cross-connects represent an ever growing mix of revenue, such that in 4-5 years’ time they come to comprise the majority of revenue per colo and user.

The cash costs at an Equinix datacenter break down like this:

% of cash operating costs at the datacenter:

Utilities: 35%

Labor: 19%

Rent: 15%

Repairs/Maintenance: 8%

Other: 23%

So, roughly half of the costs – labor, rent, repairs, ~half of “other” – are fixed.
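That “roughly half” falls straight out of the mix above, counting labor, rent, and repairs as fixed plus (the author’s approximation) half of “other”:

```python
# Rough fixed-cost share of datacenter-level cash costs, per the mix above.
# Treating "other" as ~half fixed is the post's approximation, not a disclosure.

cost_mix = {
    "utilities": 0.35,             # variable
    "labor": 0.19,                 # fixed
    "rent": 0.15,                  # fixed
    "repairs_maintenance": 0.08,   # fixed
    "other": 0.23,                 # ~half fixed
}

fixed_share = (cost_mix["labor"] + cost_mix["rent"]
               + cost_mix["repairs_maintenance"] + 0.5 * cost_mix["other"])
# ~0.535, i.e. roughly half of datacenter cash costs are fixed
```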

If you include the cash operating costs below the gross profit line [cost of revenue basically represents costs at the datacenter level: rental payments, electricity and bandwidth costs, IBX data center employee salaries (including stock comp), repairs, maintenance, security services.], the consolidated cost structure breaks down like this:

% of cash operating costs of EQIX / % of revenue / mostly fixed or variable in the short-term?

Labor: 40% / 23% / fixed (including stock-based comp)

Power: 20% / 11% / variable

Consumables & other: 19% / 10% / semi-fixed

Rent: 8% / 5% / fixed

Outside services: 7% / 4% / semi-fixed

Maintenance: 6% / 3% / fixed

With ~2/3 of EQIX’s cost structure practically fixed, there’s meaningful operating leverage as datacenters fill up and bustle with activity.  Among Equinix’s 150 IBX datacenters (that is, datacenters with ecosystems of businesses, networks, and service providers), 99 are “stabilized” assets that began operating before 1/1/2016 and are 83% leased up.  There is $5.7bn in gross PP&E tied up in those datacenters, which are generating $1.6bn in cash profit after datacenter-level stock comp and maintenance capex (~4% of revenue), translating into a 28% pre-tax unlevered return on capital.
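The return math, using the figures just cited ($bn):

```python
# Pre-tax unlevered return on Equinix's stabilized IBX datacenters,
# per the figures above ($bn); cash profit is after datacenter-level
# stock comp and maintenance capex (~4% of revenue).

gross_ppe = 5.7      # gross PP&E tied up in the 99 stabilized datacenters
cash_profit = 1.6    # cash profit generated by those datacenters

pretax_unlevered_roc = cash_profit / gross_ppe   # ~0.28, the ~28% cited above
```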

Equinix is by far the largest player in an increasingly consolidated industry.  It got that way through a fairly even combination of growth capex and M&A.  The commercial logic of mergers in this space comes not just from cross-selling IX space across a non-overlapping customer base and taking out redundant SG&A, but also from fusing the ecosystems of datacenters located within the same campus or metro, further reinforcing network effects.  For instance, through its acquisition of Telecity, Equinix got a bunch of datacenters that were adjacent to its own within Paris, London, Amsterdam, and Frankfurt.  By linking communities across datacenters within the same metros, Equinix is driving greater utilization across the metro as a whole.

While Equinix’s 14% share of the global retail colo + IX market is greater than 2x its next closest peer, if you isolate interconnection colo (the good stuff), the company’s global share is more like 60%-70%.  Furthermore, according to management, half of the next six largest players in the chart below are looking to divest their colocation assets, and of the remaining three, two serve a single region and one is mostly a wholesale operator.

Equinix points to its global footprint as a key competitive advantage, but it’s important to qualify this claim, as too many companies casually and erroneously point to their “global” presence as a moat.  By being spread across multiple continents, you can leverage overhead cost somewhat, offer multi-region bundled pricing to customers, and point to your bigness and brand during the sales process.  Equinix claims that around 85% of its customers reside in multiple metros and 58% in all three regions (Americas, EMEA, APAC)…but a lot of these multi-region relationships were simply manufactured through acquisition, and in any case, the presence of one customer in multiple datacenters doesn’t really answer the question that matters, which is this: does having a connectivity-rich colo in, say, New York City make it more likely that a customer will choose your colo in, say, Paris (and vice-versa) over a peer who is regionally better positioned and has a superior ecosystem?  I don’t see why it would.  I’m not saying that a global presence is irrelevant, just that housing the customer in one region does not make him inherently captive to you in another.  A customer’s choice of datacenter will primarily be dictated by regional location, connectivity, ecosystem density, and of course, reliability and security.

Which is why I wouldn’t be so quick to conclude that Equinix, by virtue of its global girth, wields an inherent advantage over Interxion, another fine connectivity-rich colo that gets all of its revenue from Europe.  Over the years, INXN has been a popular “play” among eventy types hoping for either a multiple re-rating on a potential REIT conversion or thinking that, as a $3.6bn market cap peon next to an acquisitive $36bn EQIX, the company could get bought.  But the company has its fundamental, standalone charms too.

The European colos appear to have learned their lesson from being burned by overexpansion in the early 2000s, and have been careful to let demand drive high-single-digit supply growth over the last decade.  As tirelessly expounded in this post, replicating a carrier-rich colo from scratch is a near insuperable feat, attesting to why there have been no significant new organic entrants in the pan-European IX data center market for the last 15 years and why customers are incredibly sticky even in the face of persistent price hikes.  European colos are also riding the same secular tailwinds propelling the US market – low latency and high connectivity requirements by B2B cloud and content platforms – though with a ~1-2 year lag.

The combination of favorable supply/demand balance, strong barriers to entry, and secularly growing demand drivers =

The near entirety of INXN’s growth has been organic too.

Compared to Equinix, Interxion earns somewhat lower returns on gross capital on mature data centers, low-20s vs. ~30%.  I suspect this is partly because Interxion does not directly benefit from high-margin interconnection revenues to the same degree as Equinix.  Interconnection constitutes only 8% of EQIX’s recurring revenue in EMEA vs. nearly 25% in the US.  And cross-connecting in Europe has historically been free or available for a one-time fee collected by the colo (although this service is transitioning towards a recurring monthly payment model, which is the status quo in the US).

[INXN has invested over €1bn in infrastructure, land, and equipment to build out the 34 fully built data centers it operated at the start of 2016.  Today, with 82% of 900k+ square feet utilized, these data centers generate ~€370mn in revenue and ~€240mn in discretionary cash flow [gross profit less maintenance capex] to the company, a 23% annual pre-tax cash return on investment [up from mid-teens 4 years ago] that will improve further as recurring revenue accretes by high-single digits annually on price increases, capacity utilization, cross-connects, and power consumption.]
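A quick back-of-the-envelope check on that bracketed return math (all inputs are the approximate figures quoted above, not precise reported numbers):

```python
# Rough check of INXN's pre-tax cash return on gross invested capital,
# using the approximate figures in the bracketed note above.
invested_capital_eur_mn = 1_000     # "over €1bn" invested to date
revenue_eur_mn = 370                # ~€370mn revenue
discretionary_cf_eur_mn = 240       # gross profit less maintenance capex

pretax_cash_return = discretionary_cf_eur_mn / invested_capital_eur_mn
print(f"{pretax_cash_return:.0%}")  # ~24% on a round €1bn; the ~23% cited reflects a slightly larger base
```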

But in any case, the returns on incremental datacenter investment are certainly lofty enough that management would want to avoid the dividend drain that would attend a REIT conversion.  Why convert when you can take all your operating cash flow, add a dollop of leverage, and invest it all in projects earning 20%+ returns at scale?  As management recently put it:

“…the idea of sort of being more tactical and as you described sort of let – taking some of that capital and paying a little bit of dividend, to me, that doesn’t smack of actually securing long-term, sustainable shareholder returns.” 

Equinix, on the other hand, must at a minimum pay out ~half its AFFO in dividends, constraining the company’s organic capacity to reinvest and forcing it to persistently issue debt and stock to fund growth capex and M&A.  Not that EQIX’s operating model – reinvesting half its AFFO, responsibly levering up, earning ~30% incremental returns, and delevering over time – has shareholders hurting.

[AFFO = Adj. EBITDA – stock comp – MCX – taxes – interest]
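The AFFO definition in that bracketed note can be sketched as a one-liner (the inputs below are made-up placeholders purely for illustration, not EQIX’s actual figures):

```python
# AFFO = Adj. EBITDA - stock comp - maintenance capex (MCX) - taxes - interest
def affo(adj_ebitda, stock_comp, maintenance_capex, taxes, interest):
    return adj_ebitda - stock_comp - maintenance_capex - taxes - interest

# hypothetical illustrative inputs ($mn):
print(affo(adj_ebitda=2_000, stock_comp=150, maintenance_capex=200, taxes=100, interest=400))  # 1150
```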

And there’s still a pretty long runway ahead for both companies.  Today’s retail colocation and interconnection TAM is around $23bn, split between carrier-neutral colos at ~$15bn and bandwidth providers at ~$8bn, the latter growing by ~2%, the former by ~8%.  Equinix’s prediction is that the 8% growth will be juiced a few points by enterprises increasingly adopting hybrid clouds, so call it 10% organic revenue growth, which would be slower than either company has registered over the last 5 years.  Layer in the operating leverage and we’re probably talking about low/mid-teens growth in free cash flow after maintenance capex.
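For what it’s worth, the blended growth rate implied by that TAM split works out to roughly 6% before the hybrid-cloud uplift (a sketch using the ~$15bn/8% and ~$8bn/2% figures above):

```python
# Weighted-average growth of the ~$23bn retail colo + interconnection TAM
carrier_neutral_bn, carrier_neutral_growth = 15, 0.08
bandwidth_bn, bandwidth_growth = 8, 0.02

total_bn = carrier_neutral_bn + bandwidth_bn
blended_growth = (carrier_neutral_bn * carrier_neutral_growth
                  + bandwidth_bn * bandwidth_growth) / total_bn
print(f"{blended_growth:.1%}")  # ~5.9% blended
```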

At 28x AFFO/mFCFE, EQIX and INXN are not statistically cheap stocks.  But it’s not so easy to find companies protected by formidable moats with credible opportunities to reinvest capital at 20%-30% returns for many years.  By comparison, a deep-moater like VRSK [7] is trading at over 30x free cash flow, growing top-line by mid/high single digits, and reinvesting nearly all its prodigious incremental cash flow in share buybacks and gems like Wood Mac and Argus that are unlikely to earn anywhere near those returns.

Notes

INXN claims to be the largest pan-European player in the market, which is technically true but also a bit misleading because in the big 4 European markets (France, Germany, Netherlands, and the UK) that constitute 65% of its business, by my estimate, Interxion still generates less than 1/3 the revenue of Equinix.  Even before the Telecity acquisition in January 2016, EQIX generated more EMEA revenue than Interxion, and now it also has more datacenters, across more countries, in the region [well, depending on how you define “region,” as the set of countries covered by Equinix’s EMEA is more expansive than that covered by Interxion].

Podcast Blurbs [Larry Summers, Levered restaurant franchisees, Books vs. podcasts, Cramer on MongoDB]


Freakonomics Radio (9/28/17; Why Larry Summers is the Economist Everyone Hates to Love)

Larry Summers:

(On infrastructure)

“I think we’ve completely mismanaged infrastructure investment in the United States.  It’s nuts that when interest rates are lower than they’ve been at any time in the last 50 years that we’re also investing less net of depreciation…it’s nuts that we have a regulatory apparatus that means that it took far longer to repair a single exit of the Oakland Bay Bridge than it did to build the entire Oakland Bay Bridge two generations ago.  There’s a small bridge across the Charles River that I’m looking at outside my office.  It’s about 300 feet long.  It was under repair with a lane of traffic closed for 5 years.  Julius Caesar built a bridge over the span of the Rhine that was 9x as long in 9 days.  So, both on the quantity of expenditure and on the efficiency of the expenditure and the streamlining of the effort, there’s plenty of room for improvement.”

(On tax repatriation)

“We have $2.5tn sitting abroad.  Indulge me if you will in an analogy.  Suppose you ran a library.  Suppose you had a lot of overdue books from your library.  You might decide to give the library amnesty so that people would bring the books home.  You might decide to say that there will never be an amnesty and people better bring the books back because otherwise the fines are going to mount.  But only an idiot would put a sign on the library door saying ‘No amnesty now, thinking about one next month.’  And yet, what have we done as a country?  We’ve said to all those businesses with $2.5tn abroad that if you bring it home right now, you’ll have to pay 35% tax, but we’re talking about and thinking about and planning maybe we’ll have some kind of tax reform where that will come down.”

(On cost disease)

“…the Consumer Price Index for all products is set to be 100 in 1983.  Well, if you look at the CPI for television sets, it’s now about 6.  If you look at the CPI for a day in the hospital room or a year in a college, it’s about 600.  In other words, since 1983, the relative price of this relative measure of education and healthcare as opposed to the TV set has changed by a factor of 100.  That’s got a number of consequences.  One is, since government is more involved in buying education and healthcare than it is in buying TVs, there’s going to be upward pressure on the size of government relative to the rest of the economy.  Another is because there’s been far greater productivity growth in the production of TV sets than in the production of government goods, a larger fraction of the workforce is going to find itself working in the areas where there’s less productivity growth, like education and healthcare…if workers become much more productive doing some things and their wage has to be the same in all sectors, then there’s going to be a tendency for the price in the areas in which labor is not becoming productive, to rise.  And that’s why it costs more to go to the theater relative to when I was a child, that’s why tuition in colleges has risen, that’s why the cost of mental health counseling has risen.”

Grant’s Podcast (10/11/17; Hamburger Helper)

John Hamburger (President of the Restaurant Finance Monitor):

“I’ve been following franchisees since the early ’80s, I actually was a franchisee back in the early ’80s.  Restaurants used to get financed one by one, unit by unit, and you’d buy the land, you’d buy the building, you’d equip it, and you had to finance each of those transactions so you’d have a real estate loan and you’d have an equipment loan.  And it took a long time and it was expensive and you thought really carefully about building restaurants.  Today, the way the deal works is that there is a huge net lease market out there that likes to own restaurant real estate.  So, franchisees over the last 5 to 10 years have decided that they can build more stores, own more stores, buy more stores if they don’t have their capital tied up in real estate.  So franchisees, just like the franchisors, have adopted this quasi asset light model where the real estate in a lot of these franchise restaurant locations are owned by an investor.  And that’s okay, it’s just that when you own real estate, you own it and can borrow against it.  When you lease real estate, the rents go up every year and so over time, it cuts down on the operating margins of the franchisees.

What does it all mean?  It’s allowed franchisees to get bigger.  So you look at Yum Brands or Burger King, they’re relying on fewer and fewer franchisees to operate their system and where risk might possibly come in is that you’ve got fewer and fewer franchisees operating these restaurants around the country.  I’ve heard Restaurant Brands tell Burger King they’d like to have as few as 50 franchisees in the country.  Well, you take 50 franchisees, you lever them up, it’s not like it was 10-20 years ago where in Burger King you had 600 franchisees and the majority of them owned their own real estate.  It’s a very levered system, the way it’s currently configured. […]

What’s interesting about Carrols (Burger King franchisee), it’s a typical franchisee in the restaurant space, it’s a larger franchisee, it’s ranked around #4 in the country in terms of size, they’re doing a lot of what the large franchisees have done over the last 5 years.  They’ve been able to borrow, they’ve been able to acquire other franchisees and build up their base of restaurants…but other franchisees have grown as well by buying up other franchise restaurants…how do they do that?  They borrow and then they sell their real estate and lease it back, that’s how all of this stuff gets financed and it adds a lot of leverage to the franchisees.

The largest owner of [this real estate] are a number of REITs focused on the restaurant business, there’s Realty Income, there’s Spirit Realty.  There’s also individuals, real estate developers, trusts, and one of the reasons they like owning this kind of real estate generally these leases are triple net, which means taxes, maintenance, and the insurance of those properties are taken care of by the tenant, so the rental income becomes an annuity to the owner, and this is a huge investment area in the US.

If I go back to the ’80s and ’90s, capital in the restaurant business came from the public markets.  We had a run of IPOs in the 1980s and 1990s, that’s where most of the growth capital came from.  Today, you have private equity funds instead of public investors driving the restaurant business.  The public markets have changed where smaller companies…have been unable to go public.  I think the last IPO in the restaurant business was 2015.”

Waking Up (10/6/17; The “After On” Interview)

Sam Harris:

“The numbers [of people who listen to the Waking Up podcast] are really surprising and don’t argue for the health of books, frankly.  A very successful book in hard cover, you are generally very happy to sell 100,000 books in hard cover over the course of the first year before it goes to paperback.  That is very likely going to hit the best-seller list, maybe if you’re a diet book you need to sell more than that, but if you sold 10,000 your first week, you almost certainly have a best seller.  And in the best case, you could sell 200,000 or 300,000 books in hardcover, and that’s a newsworthy achievement.  And there’s the 1/100th of 1% that sell millions of copies.  So, with a book I could reasonably expect to reach 100,000 people in a year and maybe some hundreds of thousands over the course of a decade.  So, all my books together now have sold, I’m pretty sure I haven’t reached 2 million people with those books.  Somewhere between 1 million and 2 million.

But with my podcast, I reach that many people in a day.  And these are long form interviews and sometimes standalone, just me talking about what I think is important to talk about for an hour or two, but often I’m speaking with a very smart guest and we can go very deep on any topic we care about.  And this is not like going on CNN and speaking for 6 minutes in attempted sound bites and you’re gone.  People are really listening in depth. […]

It’s a big commitment to write a book.  Once it’s written, you hand it in to your publisher and it takes 11 months for them to publish it.  Increasingly, that makes less and less sense.  Both the time it takes to do it and the time it takes to publish it don’t compare favorably with podcasting.  In defense of writing, there are certain things that are best done in written form.  Nothing I said really has application to [novels], reading novels is still an experience you want to have, but what I’m doing in non-fiction, that’s primarily argument driven, there are other formats through which to get the argument out.  I still plan to write books because I still love to read books and taking the time to really say something as well as you can affects everything else you do, it affects the stuff you can say extemporaneously in a conversation such as this as well.  So, I still value the process of writing and taking the time to think carefully about things.”

Mad Money w/ Jim Cramer (10/25/17)

Jim Cramer:

“At the heart of every software application there’s a database.  You need to have a system to organize, store, and process your files or none of this stuff works.  So, for software developers, a lot of thought goes into picking the right database and with the rise of mobile, social, cloud, datacenter, and IoT, a lot hinges on getting that choice right.  For the longest time, there were just two types.  You had relational databases that are basically unchanged since the 1970s, so developers need to spend a lot of time making sure modern software interacts properly with these rigid database structures from decades ago.  They simply were not designed for the demands of modern software and they certainly weren’t designed for cloud.

Since the turn of the century, though, we’ve seen the rise of non-relational databases which tried to address these shortcomings.  But the problem is that so much runs on old-school relational databases that these new non-relational ones are only worth using in a small number of cases.  Then you have non-relational databases that have become more popular recently and are widely used for big data and online applications…basically, they’re more flexible than the traditional model.

On the other hand, MongoDB does something different.  The guys who created this company got frustrated by the available database options on the market, so they built their own platforms designed for developers by developers.  The company has its own offering that they believe combines the best of both relational and non-relational databases. They use a document-based architecture that’s more flexible, easier to scale, more reliable, giving developers the ability to manage their data in a more natural way so they can more rapidly build, deploy, and maintain the software they’re working on.  It works in any environment – the cloud, on premise, or even as some kind of hybrid which has become so popular – but more important, you can use it for a broad range of applications.

To get the word out, MongoDB offers a free, stripped down version of their database platform.  You can just download it right off the website. That makes it easier for software developers to play around with the thing, if they decide they want advanced features, they can pay for any upgrade…The free version has been downloaded more than 10mn times just in the last year.  Then they sign you up for a subscription and you start paying for the enterprise version or the cloud-based version.  So far, MongoDB has more than 4,300 customers in 85 countries including some really well-known names, they’ve got Barclays, ADP, Morgan Stanley, AstraZeneca, Genentech, and a bunch of government agencies […]

How’s it work?  Let’s use the example of Barclays.  Like most financials, Barclays has invested a ton of money going digital in recent years.  But the rigid nature of old school database technology made it really hard for them to add new online features that customers could use.  And as more customers embrace mobile banking, the cost of running the mainframe kept rising, so Barclays brought in MongoDB in 2012 and it gave them significant performance improvements and a resilient system and major cost savings and the company’s in-house software developers have a much easier time developing features for digital banking.

How about the numbers?  Well, MongoDB’s revenue growth has slowed a tad so far in 2017 but it’s still very rapid, up 51% y/y and that’s not much of a deceleration from 55% in 2016.  The customer base is growing like a weed.  At the end of July, they had 4,300 customers, up from 3,200 in January and 1,700 at the beginning of 2016.  And a total of 1,900 of these users come from MongoDB’s cloud offering even though it only came out in the summer of last year.  The company has 71% gross margins, which have improved steadily over the last couple of years.  But like many newly minted tech IPOs, MongoDB is not yet profitable…the key here is annual recurring revenue (ARR) because remember they use a SaaS business model and then the contribution margin…in 2015, customers who signed up that year generated $11.5mn in ARR but they racked up $24.3mn in associated costs, meaning the contribution margin was negative.  By 2016 though, that same group of customers who signed up in 2015 generated $12.8mn in ARR but costs only came in at $5.2mn, makes sense these guys had already been signed up, giving MDB a 59% contribution margin.  In 2017, it’s risen to 60%.  Basically, as time goes on, customers become more and more lucrative because the real expense is signing them up.

Now there’s really just one thing that concerns me other than lack of profitability.  Mongo competes with some real titans of the industry, I’m talking IBM, Microsoft, Oracle, AWS, Google Cloud, Azure for more modern databases.