Economics

Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain: for example, people procrastinate more over some decisions (whether to make a “boring” trade) than over others (whether to read news about their investments).[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.


One of the most important assumptions in The Age of Em is that non-em AGI will take a long time to develop.

1.

Scott Alexander at SlateStarCodex complains that Robin rejects survey data that uses validated techniques, and instead uses informal surveys whose results better fit Robin’s biases [1]. Robin clearly explains one reason why he does that: to get the outside view of experts.

Whose approach to avoiding bias is better?

  • Minimizing sampling error and carefully documenting one’s sampling technique are two of the most widely used criteria to distinguish science from wishful thinking.
  • Errors due to ignoring the outside view have been documented to be large, yet forecasters are reluctant to use the outside view.

So I rechecked advice from forecasting experts such as Philip Tetlock and Nate Silver, and the clear answer I got was … that was the wrong question.

Tetlock and Silver mostly focus on attitudes that are better captured by the advice to be a fox, not a hedgehog.

The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement.

Tetlock’s commandment number 3 says “Strike the right balance between inside and outside views”. Neither Tetlock nor Silver offers hope that either more rigorous sampling of experts or dogmatically choosing the outside view over the inside view will help us win a forecasting contest.

So instead of asking who is right, we should be glad to have two approaches to ponder, and should want more. (Robin only uses one approach for quantifying the time to non-em AGI, but is more fox-like when giving qualitative arguments against fast AGI progress).

2.

What Robin downplays is that there’s no consensus of the experts on whom he relies, not even about whether progress is steady, accelerating, or decelerating.

Robin uses the median expert estimate of progress in various AI subfields. This makes sense if AI progress depends on success in many subfields. It makes less sense if success in one subfield can make the other subfields obsolete. If “subfield” means a guess about what strategy best leads to intelligence, then I expect the median subfield to be rendered obsolete by a small number of good subfields [2]. If “subfield” refers to a subset of tasks that AI needs to solve (e.g. vision, or natural language processing), then it seems reasonable to look at the median (and I can imagine that slower subfields matter more). Robin appears to use both meanings of “subfield”, with fairly similar results for each, so it’s somewhat plausible that the median is informative.

3.

Scott also complains that Robin downplays the importance of research spending while citing only a paper dealing with government funding of agricultural research. But Robin also cites another paper (Ulku 2004), which covers total R&D expenditures in 30 countries (versus 16 countries in the paper that Scott cites) [3].

4.

Robin claims that AI progress will slow (relative to economic growth) due to slowing hardware progress and reduced dependence on innovation. Even if I accept Robin’s claims about these factors, I have trouble believing that AI progress will slow.

I expect higher em IQ will be one factor that speeds up AI progress. Garrett Jones suggests that a 40 IQ point increase in intelligence causes a 50% increase in a country’s productivity. I presume that AI researcher productivity is more sensitive to IQ than is, say, truck driver productivity. So it seems fairly plausible to imagine that increased em IQ will cause more than a factor of two increase in the rate of AI progress. (Robin downplays the effects of IQ in contexts where a factor of two wouldn’t much affect his analysis; he appears to ignore them in this context).
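The rough arithmetic behind that guess can be sketched as follows. The 80-point IQ gain and the researcher-sensitivity multiplier below are hypothetical inputs of mine, not numbers from Jones or from the book:

```python
# Sketch of the "more than a factor of two" arithmetic.
# Assumption (mine): Jones's 40-point/50% relationship compounds,
# so an IQ gain scales productivity by 1.5**(gain/40).

COUNTRY_GAIN_PER_40_IQ = 1.5   # Jones: +40 IQ points -> +50% productivity

def productivity_multiplier(iq_gain, sensitivity=1.0):
    """Productivity multiplier implied by an IQ gain.

    sensitivity > 1 models occupations (e.g. AI research) whose
    output responds more strongly to IQ than the national average.
    """
    return COUNTRY_GAIN_PER_40_IQ ** (sensitivity * iq_gain / 40)

# A hypothetical 80-point em IQ gain at average sensitivity
# already exceeds a factor of two:
print(productivity_multiplier(80))        # 1.5**2 = 2.25
# Extra IQ-sensitivity for researchers pushes it further:
print(productivity_multiplier(80, 1.5))   # 1.5**3 = 3.375
```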

I expect that other advantages of ems will contribute additional speedups – maybe ems who work on AI will run relatively fast, maybe good training/testing data will be relatively cheap to create, or maybe knowledge from experimenting on ems will better guide AI research.

5.

Robin’s arguments against an intelligence explosion are weaker than they appear. I mostly agree with those arguments, but I want to discourage people from having strong confidence in them.

The most suspicious of those arguments is that gains in software algorithmic efficiency “remain surprisingly close to the rate at which hardware costs have fallen. This suggests that algorithmic gains have been enabled by hardware gains”. He cites only (Grace 2013) in support of this. That paper doesn’t comment on whether hardware changes enable software changes. The evidence seems equally consistent with that or with the hypothesis that both are independently caused by some underlying factor. I’d say there’s less than a 50% chance that Robin is correct about this claim.

Robin lists 14 other reasons for doubting there will be an intelligence explosion: two claims about AI history (no citations), eight claims about human intelligence (one citation), and four about what causes progress in research (with the two citations mentioned earlier). Most of those 14 claims are probably true, but it’s tricky to evaluate their relevance.

Conclusion

I’d say there’s maybe a 15% chance that Robin is basically right about the timing of non-em AI given his assumptions about ems. His book is still pretty valuable if an em-dominated world lasts for even one subjective decade before something stranger happens. And “something stranger happens” doesn’t necessarily mean his analysis becomes obsolete.

Footnotes

[1] – I can’t find any SlateStarCodex complaint about Bostrom doing something in Superintelligence that’s similar to what Scott accuses Robin of, when Bostrom’s survey of experts shows an expected time of decades for human-level AI to become superintelligent. Bostrom wants to focus on a much faster takeoff scenario, and disagrees with the experts, without identifying reasons for thinking his approach reduces biases.

[2] – One example is that genetic algorithms are looking fairly obsolete compared to neural nets, now that the two are being compared on bigger problems than the ones that were standard when genetic algorithms were trendy.

Robin wants to avoid biases from recent AI fads by looking at subfields as they were defined 20 years ago. Some recent changes in AI are fads, but some are increased wisdom. I expect many subfields to be dead ends, given how immature AI was 20 years ago (and may still be today).

[3] – Scott quotes from one of three places that Robin mentions this subject (an example of redundancy that is quite rare in the book), and that’s the one place out of three where Robin neglects to cite (Ulku 2004). Age of Em is the kind of book where it’s easy to overlook something important like that if you don’t read it more carefully than you’d read a normal book.

I tried comparing (Ulku 2004) to the OECD paper that Scott cites, and failed to figure out whether they disagree. The OECD paper is probably consistent with Robin’s “less than proportionate increases” claim that Scott quotes. But Scott’s doubts are partly about Robin’s bolder prediction that AI progress will slow down, and academic papers don’t help much in evaluating that prediction.

If you’re tempted to evaluate how well the Ulku paper supports Robin’s views, beware that this quote is one of its easier to understand parts:

In addition, while our analysis lends support for endogenous growth theories in that it confirms a significant relationship between R&D stock and innovation, and between innovation and per capita GDP, it lacks the evidence for constant returns to innovation in terms of R&D stock. This implies that R&D models are not able to explain sustainable economic growth, i.e. they are not fully endogenous.

Book review: The Age of Em: Work, Love and Life when Robots Rule the Earth, by Robin Hanson.

This book analyzes a possible future era when software emulations of humans (ems) dominate the world economy. It is too conservative to tackle longer-term prospects for eras when more unusual intelligent beings may dominate the world.

Hanson repeatedly tackles questions that scare away mainstream academics, and gives relatively ordinary answers (guided as much as possible by relatively standard, but often obscure, parts of the academic literature).

Assumptions

Hanson’s scenario relies on a few moderately controversial assumptions. The assumptions which I find most uncertain are related to human-level intelligence being hard to understand (because it requires complex systems), enough so that ems will experience many subjective centuries before artificial intelligence is built from scratch. For similar reasons, ems are opaque enough that it will be quite a while before they can be re-engineered to be dramatically different.

Hanson is willing to allow that ems can be tweaked somewhat quickly to produce moderate enhancements (at most doubling IQ) before reaching diminishing returns. He gives somewhat plausible reasons for believing this will only have small effects on his analysis. But few skeptics will be convinced.

Some will focus on the potential trillions of dollars’ worth of benefits that higher IQs might produce, but that wealth would not much change Hanson’s analysis.

Others will prefer an inside view analysis which focuses on the chance that higher IQs will better enable us to handle risks of superintelligent software. Hanson’s analysis implies we should treat that as an unlikely scenario, but doesn’t say what we should do about modest probabilities of huge risks.

Another way that Hanson’s assumptions could be partly wrong is if tweaking the intelligence of emulated Bonobos produces super-human entities. That seems to only require small changes to his assumptions about how tweakable human-like brains are. But such a scenario is likely harder to analyze than Hanson’s scenario, and it probably makes more sense to understand Hanson’s scenario first.

Wealth

Wages in this scenario are somewhat close to subsistence levels. Ems have some ability to restrain wage competition, but less than they want. Does that mean wages are 50% above subsistence levels, or 1%? Hanson hints at the former. The difference feels important to me. I’m concerned that sound-bite versions of the book will obscure the difference.

Hanson claims that “wealth per em will fall greatly”. It would be possible to construct a measure by which ems are less wealthy than humans are today. But I expect it will be at least as plausible to use a measure under which ems are rich compared to humans of today, but have high living expenses. I don’t believe there’s any objective unit of value that will falsify one of those perspectives [1].

Style / Organization

The style is more like a reference book than a story or an attempt to persuade us of one big conclusion. Most chapters (except for a few at the start and end) can be read in any order. If the section on physics causes you to doubt whether the book matters, skip to chapter 12 (labor), and return to the physics section later.

The style is very concise. Hanson rarely repeats a point, so understanding him requires more careful attention than with most authors.

It’s odd that the future of democracy gets less than twice as much space as the future of swearing. I’d have preferred that Hanson cut out a few of his less important predictions, to make room for occasional restatements of important ideas.

Many little-known results that are mentioned in the book are relevant to the present, such as: how the pitch of our voice affects how people perceive us, how vacations affect productivity, and how bacteria can affect fluid viscosity.

I was often tempted to say that Hanson sounds overconfident, but he is clearly better than most authors at admitting appropriate degrees of uncertainty. If he devoted much more space to caveats, I’d probably get annoyed at the repetition. So it’s hard to say whether he could have done any better.

Conclusion

Even if we should expect a much less than 50% chance of Hanson’s scenario becoming real, it seems quite valuable to think about how comfortable we should be with it and how we could improve on it.

Footnote

[1] – The difference matters only in one paragraph, where Hanson discusses whether ems deserve charity more than do humans living today. Hanson sounds like he’s claiming ems deserve our charity because they’re poor. Most ems in this scenario are comfortable enough for this to seem wrong.

Hanson might also be hinting that our charity would be effective at increasing the number of happy ems, and that basic utilitarianism says that’s preferable to what we can do by donating to today’s poor. That argument deserves more respect and more detailed analysis.

Book review: The Midas Paradox: Financial Markets, Government Policy Shocks, and the Great Depression, by Scott B Sumner.

This is mostly a history of the two depressions that hit the U.S. in the 1930s: one international depression lasting from late 1929 to early 1933, due almost entirely to problems with an unstable gold exchange standard; quickly followed by a more U.S.-centered depression that was mainly caused by bad labor market policies.

It also contains some valuable history of macroeconomic thought, doing a fairly good job of explaining the popularity of theories that are designed for special cases (such as monetarism and Keynes’ “general” theory).

I was surprised at how much Sumner makes the other books on this subject that I’ve read seem inadequate.

[See my previous post for context.]

I started out to research and write a post on why I disagreed with Scott Sumner about NGDP targeting, and discovered an important point of agreement: targeting nominal wages forecasts would probably be better than targeting either NGDP or CPI forecasts.

One drawback to targeting something other than CPI forecasts is that we’ve got good market forecasts of the CPI. It’s certainly possible to create markets to forecast other quantities that the Fed might target, but we don’t have a good way of predicting how much time and money those will require.

Problems with NGDP targets

The main long-term drawback to targeting NGDP (or other measures that incorporate the quantity of economic activity) rather than an inflation-like measure is that it’s quite plausible to have large changes in the trend of increasing economic activity.

We could have a large increase in our growth rate due to a technology change such as uploaded minds (ems). NGDP targeting would create unpleasant deflation in that scenario until the Fed figured out how to adjust to new NGDP targets.

I can also imagine a technology-induced slowdown in economic growth, for example: a switch to open-source hardware for things like food and clothing (3-d printers using open-source designs) could replace lots of transactions with free equivalents. That would mean a decline in NGDP without a decline in living standards. NGDP targeting would respond by creating high inflation. (This scenario seems less likely and less dangerous than the prior scenario).

Basil Halperin has some historical examples where NGDP targeting would have produced similar problems.

Problems with inflation forecasts?

Critics of inflation targeting point to problems associated with oil shocks or with strange ways of calculating housing costs. Those cause many inflation measures to temporarily diverge from what I want the Fed to focus on, which is the problem of sticky wages interacting with weak nominal wages to create unnecessary unemployment.

Those problems with measuring inflation are serious if the Fed uses inflation that has already happened or uses forecasts of inflation that extend only a few months into the future.

Instead, I recommend using multi-year CPI forecasts based on several different time periods (e.g. in the 2 to 10 year range), and possibly forecasts for time periods that start a year or so in the future (this series shows how to infer such forecasts from existing markets). In the rare case where forecasts for different time periods say conflicting things about whether the Fed is too tight or loose, I’d encourage the Fed to use its judgment about which to follow.
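As a sketch of the sort of calculation involved (my construction, with hypothetical quotes, not the linked series): a forecast for a period starting a couple of years out can be backed out of two spot breakeven rates, where each breakeven is the nominal Treasury yield minus the TIPS yield at that maturity:

```python
# Inferring a forward inflation forecast from two spot breakevens.
# Assumption (mine): breakevens compound annually, so the forward
# rate is the geometric average implied between the two maturities.

def forward_inflation(be_short, t_short, be_long, t_long):
    """Annualized inflation forecast for the period between t_short
    and t_long years out, implied by the two spot breakeven rates."""
    growth = (1 + be_long) ** t_long / (1 + be_short) ** t_short
    return growth ** (1 / (t_long - t_short)) - 1

# Hypothetical quotes: 2-year breakeven 1.5%, 10-year breakeven 2.0%.
# The implied 2y-to-10y forward forecast is a bit above 2%:
print(round(forward_inflation(0.015, 2, 0.020, 10), 4))  # -> 0.0213
```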

The multi-year forecasts have historically shown only small reactions to phenomena such as the large spike in oil prices in mid 2008. I expect that pattern to continue: commodity price spikes happen when markets get evidence of their causes/symptoms (due to market efficiency), not at predictable future times. The multi-year forecasts typically tell us mainly whether the Fed will persistently miss its target.

Won’t using those long-term forecasts enable the Fed to make mistakes that it corrects (or over-corrects) for shorter time periods? Technically yes, but that doesn’t mean the Fed has a practical way to do that. It’s much easier for the Fed to hit its target if demand for money is predictable. Demand for money is more predictable if the value of money is more predictable. That’s one reason why long-term stability of inflation (or of wages or NGDP) implies short-term stability.

It would be a bit safer to target nominal wage rate forecasts rather than CPI forecasts if we had equally good markets forecasting both. But I expect it to be easier to convince the public to trust markets that are heavily traded for other reasons, than it is to get them to trust a brand new market of uncertain liquidity.

NGDP targeting has been gaining popularity recently. But targeting market-based inflation forecasts will be about as good under most conditions [1], and we have good markets that forecast the U.S. inflation rate [2].

Those forecasts have a track record that starts in 2003. The track record seems quite consistent with my impressions about when the Fed should have adopted a more inflationary policy (to promote growth and to get inflation expectations up to 2% [3]) and when it should have adopted a less inflationary policy (to avoid fueling the housing bubble). It’s probably a bit controversial to say that the Fed should have had a less inflationary policy from February through July or August of 2008. But my impression (from reading the stock market) is that NGDP futures would have said roughly the same thing. The inflation forecasts sent a clear signal starting in very early September 2008 that Fed policy was too tight, and that’s about when other forms of hindsight switch from muddled to saying clearly that Fed policy was dangerously tight.

Why do I mention this now? The inflation forecast dropped below 1 percent two weeks ago for the first time since May 2008. So the Fed’s stated policies conflict with what a more reputable source of information says the Fed will accomplish. This looks like what we’d see if the Fed was in the process of causing a mild recession to prevent an imaginary increase in inflation.

What does the Fed think it’s doing?

  • It might be relying on interest rates to estimate what its policies will produce. Interest rates this low after 6.5 years of economic expansion resemble historical examples of loose monetary policy more than they resemble the stereotype of tight monetary policy [4].
  • The Fed could be following a version of the Taylor Rule. Given standard guesses about the output gap and equilibrium real interest rate [5], the Taylor Rule says interest rates ought to be rising now. The Taylor Rule has usually been at least as good as actual Fed policy at targeting inflation indirectly through targeting interest rates. But that doesn’t explain why the Fed targets interest rates when that conflicts with targeting market forecasts of inflation.
  • The Fed could be influenced by status quo bias: interest rates and unemployment are familiar types of evidence to use, whereas unbiased inflation forecasts are slightly novel.
  • Could the Fed be reacting to money supply growth? Not in any obvious way: the monetary base stopped growing about two years ago, M1 and MZM growth are slowing slightly, and M2 accelerated recently (but only after much of the Fed’s tightening).

Scott Sumner’s rants against reasoning from interest rates explain why the Fed ought to be embarrassed to use interest rates to figure out whether Fed policy is loose or tight.

Yet some institutional incentives encourage the Fed to target interest rates rather than predicted inflation. It feels like an appropriate use of high-status labor to set interest rates once every few weeks based on new discussion of expert wisdom. Switching to more or less mechanical responses to routine bond price changes would undercut much of the reason for believing that the Fed’s leaders are doing high-status work.

The news media storytellers would have trouble finding entertaining ways of reporting adjustments that consisted of small hourly responses to bond market changes. Whereas decisions made a few times per year are uncommon enough to be genuinely newsworthy. And meetings where hawks struggle against doves fit our instinctive stereotype for important news better than following a rule does. So I see little hope that storytellers will want to abandon their focus on interest rates. Do the Fed governors follow the storytellers closely enough that the storytellers’ attention strongly affects the Fed’s attention? Would we be better off if we could ban the Fed from seeing any source of daily stories?

Do any other interest groups prefer stable interest rates over stable inflation rates? I expect a wide range of preferences among Wall Street firms, but I’m unaware which preferences are dominant there.

Consumers presumably prefer that their banks, credit cards, etc have predictable interest rates. But I’m skeptical that the Fed feels much pressure to satisfy those preferences.

We need to fight those pressures by laughing at people who claim that the Fed is easing when markets predict below-target inflation (as in the fall of 2008) or that the Fed is tightening when markets predict above-target inflation (e.g. much of 2004).

P.S. – The risk-reward ratio for the stock market today is much worse than normal. I’m not as bearish as I was in October 2008, but I’ve positioned myself much more cautiously than normal.

Notes:

[1] – They appear to produce nearly identical advice under most conditions that the U.S. has experienced recently.

I expect inflation targeting to be modestly safer than NGDP targeting. I may get around to explaining my reasons for that in a separate post.

[2] – The link above gives daily forecasts of the 5 year CPI inflation rate. See here for some longer time periods.

The markets used to calculate these forecasts have enough liquidity that it would be hard for critics to claim that they could be manipulated by entities less powerful than the Fed. I expect some critics to claim that anyway.

[3] – I’m accepting the standard assumption that 2% inflation is desirable, in order to keep this post simple. Figuring out the optimal inflation rate is too hard for me to tackle any time soon. A predictable inflation rate is clearly desirable, which creates some benefits to following a standard that many experts agree on.

[4] – provided that you don’t pay much attention to Japan since 1990.

[5] – guesses which are error-prone and, if a more direct way of targeting inflation is feasible, unnecessary. The conflict between the markets’ inflation forecast and the Taylor Rule’s implication that near-zero interest rates would cause inflation to rise suggests that we should doubt those guesses. I’m pretty sure that equilibrium interest rates are lower than the standard assumptions. I don’t know what to believe about the output gap.

I was quite surprised by a paper (The Surprising Alpha From Malkiel’s Monkey and Upside-Down Strategies [PDF] by Robert D. Arnott, Jason Hsu, Vitali Kalesnik, and Phil Tindall) about “inverted” or upside-down[*] versions of some good-looking strategies for better-than-market-cap weighting of index funds.

They show that the inverse of low volatility and fundamental weighting strategies do about as well as or outperform the original strategies. Low volatility index funds still have better Sharpe ratios (risk-adjusted returns) than their inverses.

Their explanation is that most deviations from weighting by market capitalization will benefit from the size effect (small caps outperform large caps), and will also have some tendency to benefit from value effects. Weighting by market capitalization causes an index to have lots of Exxon and Apple stock. Fundamental weighting replaces some of that Apple stock with small companies. Weighting by anything that has little connection to company size (such as volatility) reduces the Exxon and Apple holdings by more than an order of magnitude. Both of those shifts exploit the benefits of investing in small-cap stocks.

Fundamental weighting outperforms most strategies. But inverting those weights adds slightly more than 1% per year to those already good returns. The only way that makes sense to me is if an inverse of market-cap weighting would also outperform fundamental weighting, by investing mostly in the smallest stocks.

They also show you can beat market-capitalization weighted indices by choosing stocks at random (i.e. simulating monkeys throwing darts at the list of companies). This highlights the perversity of weighting by market-caps, as the monkeys can’t beat the simple alternative of investing equal dollar amounts in each company.
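A toy simulation (my construction, not the paper’s methodology) illustrates that point: in a universe where small caps carry even a modest return premium, randomly chosen equal-weight dart portfolios beat the cap-weighted index on average:

```python
# Toy monkey-dart simulation.  Assumptions (mine): market caps follow
# a rough power law, and expected returns include a "size effect"
# premium that shrinks as caps get larger.

import random

random.seed(0)

def simulate(n_stocks=1000, n_monkeys=500, picks=30, size_premium=0.02):
    caps = sorted(random.paretovariate(1.2) for _ in range(n_stocks))
    total = sum(caps)
    # Expected return: 5% base, plus a premium that declines with size rank.
    rets = [0.05 + size_premium * (1 - i / n_stocks)
            for i in range(n_stocks)]
    # Cap weighting concentrates in the largest (lowest-premium) stocks.
    cap_weighted = sum(c / total * r for c, r in zip(caps, rets))
    # Each monkey throws `picks` darts and equal-weights the hits.
    monkey_avg = sum(
        sum(rets[i] for i in random.sample(range(n_stocks), picks)) / picks
        for _ in range(n_monkeys)) / n_monkeys
    return cap_weighted, monkey_avg

cap_ret, monkey_ret = simulate()
print(cap_ret < monkey_ret)  # the darts beat the cap-weighted index
```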

This increases my respect for the size effect. I’ve reduced my respect for the benefits of low volatility investments, although the reduced risk they produce is still worth something. That hasn’t much changed my advice for investing in existing ETFs, but it does alter what I hope for in ETFs that will become available in the future.

[*] – They examine two different inverses:

  1. Taking the reciprocal of each stock’s original weight
  2. Taking the maximum of the original weights and subtracting each stock’s original weight from it

In each case the resulting weights are then normalized to add to 1.
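A minimal sketch of those two inversions (my reading of the description above, applied to hypothetical strategy weights):

```python
# Two ways to "invert" a set of portfolio weights, each followed
# by normalization so the new weights sum to 1.

def inverse_reciprocal(weights):
    """Inverse 1: weight each stock by the reciprocal of its
    original weight, then normalize."""
    inv = [1 / w for w in weights]
    total = sum(inv)
    return [x / total for x in inv]

def inverse_max_minus(weights):
    """Inverse 2: weight each stock by max(weights) minus its
    original weight, then normalize."""
    m = max(weights)
    inv = [m - w for w in weights]
    total = sum(inv)
    return [x / total for x in inv]

w = [0.5, 0.3, 0.2]            # hypothetical original strategy weights
print(inverse_reciprocal(w))   # the small positions become the largest
print(inverse_max_minus(w))    # the largest position drops to zero
```

Note that both inversions shift weight from the strategy’s biggest positions toward its smallest ones, which is how they pick up the size effect described above.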

Book review: Masters of the Word: How Media Shaped History, by William J. Bernstein.

This is a history of the world which sometimes focuses on how technology changed communication, and how those changes affected society.

Instead of carefully documenting a few good ideas, he wanders over a wide variety of topics (including too many descriptions of battles and of individual people).

His claims seem mostly correct, but he often failed to convince me that he has good reason for believing them. E.g. when trying to explain why the Soviet economy was inefficient (haven’t enough books explained that already?) he says the “absence of a meaningful price signal proved especially damaging in the labor market”, but supports that by mentioning peculiarities which aren’t clear signs of damage, then describing some blatant waste that wasn’t clearly connected to labor market problems (and without numbers, doesn’t tell us the magnitude of the problems).

I would have preferred that he devote more effort to evaluating the importance of changes in communication to the downfall of the Soviet Union. He documents increased ability of Soviet citizens to get news from sources that their government didn’t control at roughly the time Soviet power weakened. But it’s not obvious how that drove political change. It seems to me that there was an important decrease in the ruthlessness of Soviet rulers that isn’t well explained by communication changes.

I liked his description of how affordable printing presses depended on a number of technological advances, suggesting that printing could not easily have arisen at other times or places.

The claim I found most interesting was that the switch from reading aloud to reading silently, and the related ability to write alone (as opposed to needing a speaker and a scribe), made it easier to spread seditious and sexual writings due to increased privacy.

Bernstein is optimistic that improved communication technology will have good political effects in the future. I partly agree, but I see more risks than he does (e.g. his liking for the democratic features of the Arab Spring isn’t balanced by much concern over the risks of revolutionary violence).

Book review: Fragile by Design: The Political Origins of Banking Crises and Scarce Credit, by Charles W. Calomiris, and Stephen H. Haber.

This book starts out with some fairly dull theory, then switches to specific histories of banking in several countries, with moderately interesting claims about how differences in which interest groups acquired power influenced the stability of banks.

For much of U.S. history, banks were mostly constrained to a single location, due to farmers who feared banks with many branches would shift their lending elsewhere when local crop failures made local farms risky to loan to. Yet comparing to Canada, where seemingly small political differences led to banks with many branches, it seems clear that U.S. banks were more fragile because of those restrictions, and less competition in the U.S. left consumers with less desirable interest rates.

By the 1980s, improved communications eroded farmers’ ability to tie banks to one locale, so political opposition to multi-branch banks vanished, resulting in a big merger spree. The biggest problem with this merger spree was that the regulators who approved the mergers asked for more loans to risky low-income borrowers. As a result, banks (plus Fannie Mae and Freddie Mac) felt compelled to lower their standards for all borrowers (the book doesn’t explain what problems they would have faced if they had used different standards for loans the regulators pressured them to make).

These stories provide a clear and plausible explanation of why the U.S. has a pattern of banking crises that Canada and a few other well-run countries have almost entirely avoided over the past two centuries. But they suggest that U.S. banking crises should have been more unusual among mature democracies than they actually were.

The authors are overly dismissive of problems that don’t fit their narrative. Commenting on the failure of Citibank, Lehman, AIG, etc to sell more equity in early 2008, they say “Why go to the markets to raise new capital when you are confident that the government is going to bail you out?”. It seems likely bankers would have gotten better terms from the market as long as they didn’t wait until the worst part of the crisis. I’m pretty sure they gave little thought to bailouts, and relied instead on overly complacent expectations for housing prices.

The book has a number of asides that seem as important as their main points, such as claims that Britain’s greater ability to borrow money led to its military power, and its increased need for military manpower drove its expansion of the franchise.