Transfin. | All Signal. No Noise.

Understanding Debt: Difference between 'Good' and 'Bad' Debt

Nikhil Arora

Credit

Today let's talk about one of finance's most feared and misunderstood concepts. Debt.

Debt forms the backbone of any economy. It is inescapable. No matter how rich you are or become, at some point you will be compelled to deal with it. Either when you use a credit card, go abroad for higher studies, buy a house, or decide to scale up your company.

Debt is modern society’s engine of growth.

Let's backtrack for a moment. How does Debt work? What does it look like? Why does it arise in the first place? If unchecked, how can it go so wrong?

The concept is straightforward.

Say you want to buy a car selling at Rs 10L. But you don’t wish to pay the entire amount as a lump sum. After all, Rs 10L is a sizeable amount. When you mention your concern to the car salesman, he promptly offers you an alternative. Instead of asking for Rs 10L upfront, he says, why don’t you just pay Rs 1L (i.e. only 10% of the car’s selling price) and agree to a payment plan of Rs 18,000 per month…and the car is all yours!

Not bad huh?!

Think about what just happened. For only Rs 1L, and a small monthly pay-out, your favourite Rs 10L car is yours to take home!

Let us assume you agree to this option. Well, congratulations! On two counts:

First: for your new vehicle.

And second: you just took on some Debt (here known as a “car loan”).

What!

Yes. The alternative presented by the car salesman included a mysterious third party, i.e. a bank or a financing company, which in effect paid a major share of the lump sum (i.e. Rs 9L) to the car showroom on your behalf. With the remaining Rs 1L coming from you (remember?), the car showroom makes its money on day one, as it would have liked.

The Rs 18,000 per month that you now shell out, say for the next seven years, goes to that same third party (from whom you effectively “borrowed”) to repay the Rs 9L plus…surprise, surprise…Interest! (A rough amortisation sketch at the end of this piece shows how a figure like Rs 18,000 can arise.)

The car showroom makes its money upfront. The third party makes its money over the next seven years by charging Interest. You get to buy your car, right now.

So, remember, when you “borrow” money…you take on Debt.

Why is Debt so Attractive?

Well, for starters, it allows you to spend beyond your present means. It allows you to invest and grow. It allows you to consume more. It lets you take home a car by paying only a small part of its total value upfront.

Why is it so Risky?

The fact that you have borrowed money means that you need to pay it back. And in most cases, you need to pay it back with Interest. And if you don’t pay your dues, you’ll be in trouble.

What Kind of Trouble?

Let us get back to our car example. It has now been almost five months since you bought the car. You’ve made five payments of Rs 18,000 each, all on time. But in the sixth month…say your company starts downsizing…and unfortunately you end up losing your job. You don’t have an income, and now the Rs 18,000 per month hurts.

A month passes by…you are unable to find a new job…and you end up missing a due payment.

Someone from the bank calls and gives you a stern warning. You’re hopeful that you’ll get back on your feet soon, so you dish out another Rs 18,000 from your savings, but the bank levies a small penalty this time for the delayed payment.

Another month passes by…you still don’t have a job…and you start panicking. You call the bank and tell them you’re unable to pay them anymore. Your bank account is almost empty. You don’t have any savings.
The bank sends a man who takes away (or “repossesses”) your beloved car.

Another month passes by, and amidst all these distractions, you somehow manage to snag a new job. The pay cheques are back, and you are once again at ease. You thank your stars…thinking the worst is over!

But is it? You now wish to apply for a credit card. The credit card company rejects you. Your health insurance policy is up for renewal, and your premium spikes. You try to take another car loan, and this time the Interest quoted is much higher, pushing the monthly payment well above the Rs 18,000 of last time!

What Happened?

Simple. For the banking system, you are now deemed a risky borrower. Your earlier “default” on the car loan turned your good debt into bad. Anytime you need to borrow in the future, the system will remind you of your risky behaviour, either through rejection or through a higher Interest rate.

This distinction between good debt and bad debt is important. Good debt can easily turn into bad without proper planning or due to unforeseen circumstances.

Debt can do wonders and grant you 'leverage', but chasing too much leverage comes with its own set of risks, the costs of which can be far-reaching and far too real.

Debt is a tool that works best when used carefully.

Scratch that. Debt is a tool that ONLY works when used carefully.
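For the curious, here is a minimal sketch of the arithmetic behind the car loan above. The article does not state the interest rate, so the roughly 16% per annum figure below is an assumption chosen purely to illustrate how an EMI near Rs 18,000 on a Rs 9L loan over seven years can arise, using the standard equated monthly instalment formula.

```python
# Minimal sketch of the car-loan arithmetic above (assumed rate, illustrative only).
# EMI formula: EMI = P * r * (1 + r)**n / ((1 + r)**n - 1), with r = monthly rate.

def emi(principal: float, annual_rate: float, months: int) -> float:
    """Equated monthly instalment for a fully amortising loan."""
    r = annual_rate / 12.0
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

principal = 900_000        # Rs 9L financed by the bank
annual_rate = 0.16         # assumed ~16% p.a. (not stated in the article)
months = 7 * 12            # seven-year payment plan

payment = emi(principal, annual_rate, months)
total_paid = payment * months

print(f"Monthly payment: Rs {payment:,.0f}")              # ~Rs 17,900
print(f"Total paid over 7 years: Rs {total_paid:,.0f}")   # ~Rs 15L repaid on a Rs 9L loan
```

The gap between the roughly Rs 15L repaid and the Rs 9L borrowed is the Interest the third party earns for waiting seven years to get its money back.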

China in Transition – Will it Grow Old Before it Becomes Rich?

Colin Lloyd

China

China is evolving from a current account surplus to a deficit country.

- Increased domestic consumption and dis-saving by an ageing population drive the trend
- Lower investment in developed countries may be assuaged by major central banks
- Emerging and frontier markets will struggle to replace China’s long-term investment

China has slowly seen its current account surplus dwindle. In large part, this change from surplus to deficit has been driven by demographic forces. As a result of China’s previous ‘one child’ policy, enacted in 1979, the country is destined to grow old before it becomes rich, at least by Western standards. The one child policy has now been relaxed but, as yet, only a tiny percentage of parents have applied to have a second child. What does that mean for the prospects for Chinese assets, and what might be the implications for the asset markets of its trading partners?

Demographic disaster need not be the end-game for China. Writing back in May 2015 for CFR’s Foreign Affairs – China Will Get Rich Before It Grows Old – Baozhen Luo speculated that:

If China continues to meet its demographic challenge head-on, it might yet be able to grow old and rich at the same time.

During the last 70 years China’s average life expectancy has risen from 35 to 75 whilst its fertility rate has collapsed – it is now lower than that of France or the US. On current trend, 30% of China’s population will be over 60 by 2050 as the median age rises to 46 years. Nonetheless, this powerful trend has been tempered by rising female labour-force participation and educational improvements, which have boosted productivity. Another factor helping to offset the impact of ageing is the rising number of beyond-retirement-age workers. These elements, though significant, remain insufficient to stop China’s working-age population declining; according to the National Bureau of Statistics it peaked at 941 million in 2011.

Other emerging economies, especially India and Indonesia, continue to reap the demographic dividend, capturing market share in low-tech, labour-intensive industries. Chinese exports have not vanished, however: India’s trade deficit with China hit $62bn in 2017, even in the face of escalating Chinese labour costs. Cognisant of its dwindling competitive advantage in unskilled labour, China has concentrated on adding value through technological investment to raise productivity further. As developed coastal regions become more high-tech, lower-cost manufacturing industries have been relocated inland, capitalising on the continued flow of low-cost, unskilled rural workers migrating to the cities.

China’s leaders are keen observers of the tide: where the country cannot compete in trade, it has become a prominent investor. Its focus has been on neighbouring countries such as Vietnam, Laos and Cambodia. It has also made significant strategic investments in resource-rich, but economically underdeveloped, countries in regions such as Africa.

The demographic problems facing China are substantial. Pension deficits are rising, as are healthcare costs; meanwhile China’s high savings rate is finally beginning to diminish. In Deloitte’s Q3 2017 Insights report – Ageing Tigers, Hidden Dragons – the authors discuss a third wave of Asian growth, pointing to India and noting that China’s economic expansion, whilst still the largest contributor to world GDP growth in absolute terms, has been slowing for several years.
The chart below shows the steep peak in China’s working-age population around 2011; on current trend China’s average age will exceed that of the US by 2021 and equal that of Japan by 2045:

Source: Deloitte

During the next decade, India’s working-age population will rise by 115mn: more than half of the 225mn expected increase across Asia as a whole. This is not just a Chinese problem; with the notable exceptions of Indonesia and the Philippines, the ageing problem will also beset the rest of Asia, as the chart below makes clear:

Source: Deloitte

As for China growing old before it grows rich, this depends on one’s definition. In a recent note, Craig Botham of Schroders – Does it matter if China gets old before it gets rich? – makes the following observations:

Current GDP per capita is a little under $10,000, against $40,000 in Japan and $60,000 in the US. With its current median age, China is the world’s 67th oldest country, and a median age of 45 would make it the 6th oldest country in the world today, which seems a good criterion for being “old” in absolute terms. China hits this age around 2035. Defining “rich” is also difficult, but taking the poorest Western European economy as an example would require GDP per capita of around $20,000. Reaching $20,000 GDP per capita in real terms by 2035 would require annual income growth of nearly 5%, when growth is at 6.5% and slowing. Raising the income threshold to $30,000 would need annual growth of 8%, and $40,000 (modern Japan) would require 10% growth. Achieving US levels of $60,000 would require a herculean 13% rate. It seems highly likely that China will indeed become old before it becomes rich.

China has far to go – and India even further – before they reach the income levels of the developed nations, but in China’s case the rebalancing towards domestic consumption is beginning to transform a country which was once the epitome of a mercantilist exporter of manufactures.

Last week saw the publication of a lead article in the Economist – China may soon run its first annual current-account deficit in decades – and, as the author writes, the implications will be profound. The chart below shows a country in transition, regardless of its record trade deficit with the US:

Source: The Economist

Ignoring the blip in March 2018, China may run its first current account deficit since 1993 this year. In 2017 the current account surplus was 1.3% of GDP, down from 1.8% in 2016. The ratio has been falling steadily from a high of 10% in 2007. To balance the books, the country will need to attract foreign investment; an acceleration in the policy of financial market liberalisation is likely to be embraced. The Economist expresses it like this:

China’s decades of surpluses reflected the fact that for years it saved more than it invested. Thrifty households hoarded cash. The rise of great coastal manufacturing clusters meant exporters earned more revenues than even China could reinvest. But now that has begun to change. Consumers are splashing out on cars, smartphones and designer clothes. Chinese tourists are spending immense sums overseas. As the population grows older the national savings rate will fall further, because more people in retirement will draw down their savings. … China will need to attract net capital inflows… has eased quotas for foreigners buying bonds and shares directly… Pension funds and mutual funds all over the world are considering increasing their exposure to China.
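Craig Botham's growth-rate figures above are a matter of compound arithmetic. Here is a minimal sketch of the calculation; the starting level of roughly $10,000 GDP per capita and a 16-year horizon to 2035 are assumptions for illustration, so the results differ slightly from the figures quoted in his note.

```python
# Required compound annual growth to hit various GDP-per-capita targets by ~2035.
# Starting level and horizon are assumptions for illustration; Botham's note may
# use slightly different inputs, hence the small differences from his figures.

current_gdp_per_capita = 10_000   # roughly China's level today (USD)
years_to_2035 = 16                # assumed horizon

for target in (20_000, 30_000, 40_000, 60_000):
    required_cagr = (target / current_gdp_per_capita) ** (1 / years_to_2035) - 1
    print(f"${target:,}: {required_cagr:.1%} annual growth required")

# Approximate output:
#   $20,000: 4.4%   $30,000: 7.1%   $40,000: 9.1%   $60,000: 11.9%
```

Whatever the exact inputs, the direction of the conclusion is the same: the richer the target, the more implausible the required growth rate against a base of 6.5% and slowing.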
This chart shows the evolution of China’s gross saving rate as a ratio of GDP since 2006:

Source: CEIC

The savings rate may be in decline, but capital market liberalisation is not without risks. Many Chinese have managed to extract money from China despite capital controls; the real-estate markets of Australia, Canada and, to a lesser extent, London and New York bear testament to this trend. Yet, to attract the substantial foreign investment that will be required, it is important for foreigners to feel confident that they can withdraw their investment. The Chinese authorities risk opening the floodgates. Despite the different conclusions they are likely to draw, Malaysia’s imposition of capital controls during the Asian crisis of 1998 remains fresh in the minds of Chinese officials and institutional investors alike.

Foreign direct investment may be robust, but far more will be needed as the trade balance switches sign. February saw FDI in manufacturing rise 12%, whilst investment in high-tech manufacturing grew by a less stellar 9.3%.

For the Chinese authorities, another concern with liberalisation is the potential impact on many state-owned enterprises. These heavily indebted behemoths are in a parlous position. Root and branch reform will be required to ensure they do not precipitate financial, social and political instability.

The rest of the world will not be immune to the impact of China switching from current account surplus to deficit either. After the Japanese stopped investing their surplus earnings abroad, the Chinese took up the gauntlet. Now that the flow of investment from China is diminishing, less economically developed countries, such as those in Africa, will find their infrastructure investments curtailed and their longer-term interest rates rising. The US Treasury will have to rely more heavily on its central bank to fill the void of Chinese investment dollars, but less developed countries will suffer. Other current account surplus countries will fill the investment void, but, unlike China, which has been investing for the long run, they are less likely to buy and hold over such an extended time horizon.

Conclusions and Investment Opportunities

China has spent many years sterilising the effect of its current account surplus by increasing official reserves. A large proportion of these reserves have been invested in US Treasuries, although latterly other developed government bond markets have benefitted from a move towards greater reserve diversification on the part of the People’s Bank of China (PBoC).

As the current account switches sign from surplus to deficit, China will need to attract greater foreign investment to prevent its currency from declining. A relaxation of capital controls will be necessary to support the growing domestic consumption demand of an ageing populace. The risk for China is of greater domestic instability. The authorities fear a flood of hitherto landlocked private savings being sent abroad, whilst fickle inward foreign investment offers a poor substitute, driven, as it is, by the need for short-term performance.

The solution to the PBoC’s capital conundrum is to adopt the policy levers favoured by developed-nation central banks. Using quantitative techniques, it can manage interest rates across the yield curve while Chinese capital markets steadily transform from a side-show into the main attraction.
The transition will be uneven, but the needs of a semi-affluent, ageing society are even more pressing than those of the deficit nations of the affluent developed world. Unlike Japan, whose international investments took the form of individual retirement savings, China’s external investments have been largely centrally directed. The need of China’s ageing society to draw down on its savings requires that the government replace domestic investment with foreign capital. To attract this capital to state-owned enterprises will require more than just the relaxation of capital controls; it will require root and branch reform of these enterprises.

As China moves from an economic model of Communism with beauty spots to Capitalism with warts, its outlook will inevitably become shorter-term, making asset markets more volatile. The influence of its central bank will increase dramatically as it adopts the policies of its developed-nation peers. The overall pattern of international capital flows will become more fickle, to the detriment of less developed countries.

Originally Published in In the Long Run

The "New" Indian Economy: Posing A Conundrum for the State, Markets and the Democracy

CUTS International

Macro

There is something about our societies that is changing rapidly, with conversations on the role of the state, markets and democracy beginning to gain traction of late. To begin with, India was not meant to be a market democracy, if the constitutional spirit is anything to go by. Twenty-nine years after independence, in 1976, we felt the need to formally include ‘socialism’ as an explicit reminder in the Preamble to our Constitution. Yet only 15 years later, in 1991, we felt the need to move our practice away from that goal and bring about market liberalisation. In the 28 years since, the world has thrown at us two grand failures – the failure of socialism and the failure of capitalism.

Part of the reason for these failures can be attributed to how the state responded to both these principles of running and managing the economy, not just in India but across the globe. And to make things more complex, the Chinese growth model serves as a rude shock to the idea of democracy around the world (speaking strictly in the economic sense).

In short, what we have gathered is that the relationship between markets and democracies has to be crafted carefully, rather than assuming that the latter will automatically facilitate the former. This thought, once considered a truism, is now under severe scrutiny. Therefore, the role of the state assumes not only unprecedented significance but also calls for a radical overhaul in its capacity to analyse and act.

This is particularly important in light of the following facts:

First, the fundamentals of ‘industrialism’ – premised upon the relationship between the factors of production, i.e. land, labour, technology and capital – are being challenged the world over. In the new economy, facilitated by unprecedented technological intervention, this relationship is increasingly being redefined, and the search to disrupt drivers of economic inequality is preoccupying policy discourse globally. In other words, the new economy is rendering some of these factors increasingly redundant in the quest for higher productivity.

It is therefore not surprising that questions about the fundamental structures of economies are gaining traction. India is no exception. With a burgeoning labour force coupled with the challenges of jobless growth, equity and sustainability, these questions are all too important for India as well.

Second, many of the imperatives for the growth and development of India – better health, quality education, affordable and reliable electricity, productive and remunerative agriculture, digital and physical connectivity, sustainable urbanisation, Ease of Doing Business and non-vulnerable employment – cannot be addressed without the proactive role of subnational entities.

Further, with many disruptions catalysed by rapid technological change, traditional business models which catered to the provisioning of basic necessities are also going through rapid change. For instance, large-scale utilities have started to focus on decentralised models, while ‘individual focus’ is gaining supremacy as technology increasingly facilitates the fulfilment of unique demands in almost all spheres of life.

In other words, what we are seeing is a trend towards hyper-decentralisation and fragmentation.

Third, the new data economy is bound to challenge the most basic of all things, i.e. freedom. All democracies, regulations and markets around the world are modelled around this concept. Synonymous with freedom is individual choice.
However, with massive amounts of data being collected and processed by big tech corporations, which have become larger than nation states in many ways, the fear is that individual choice could well be shaped and determined by big tech corporations rather than by people. With the idea of ‘people’ being challenged, the idea of democracy too stands challenged.

The combination of these three is perhaps the biggest challenge we face today. Ironically, the state is still cast in the old mould and hence slow to react to these challenges. Evidence of this is that we have not yet accorded due importance to many of these issues. Conversations on inequality, however, have just started to gain traction.

For instance, new ideas like Universal Basic Income or Universal Basic Capital are being discussed as new redistribution mechanisms, but at best they seem to be avoiding some fundamental questions. The proponents of these propositions are caught up in the merits, demerits and design aspects of these solutions without acknowledging that redistribution strategies have failed miserably in the past. More importantly, focusing on these aspects obscures a more important question: why are we talking about them in the first place?

Therefore, the fundamental fault lines in the economy need to be understood – understood well enough to see that misallocation in any factor market can lead to misallocations across the chain. In a globalised and hyper-connected world, such misallocations would be difficult to rein in within any particular geography. For instance, if ‘capital’ is allowed to make unlimited amounts of capital, the end result would be the death of capital itself, as other capacities and capabilities would remain stalled. In such a scenario, the means to create new capital would eventually diminish.

Therefore, temptations to continue with business as usual must be resisted. In other words, there is no point in aping the growth models practised by the developed economies. One must understand that it is exactly those growth models that have got us into this logjam.

To put it differently, every time a system fails, a stimulus of ‘cheap’ (low interest rate) capital is infused. With money coming in cheap, misallocation is easy, and the result is an excess of capital and an excess of labour. The other side of this problem is stranded capacity and high NPAs, which have been plaguing Indian banks.

To illustrate, one need only look back at the investment binge between 2003 and 2012. Nothing short of a bubble, this binge was fuelled by excess global liquidity and easy bank credit. It resulted in Indian businesses adding massive capacity based on over-optimistic domestic estimates and Chinese demand. But with the global and domestic downturn hitting demand, the excesses of that period meant high NPAs for financial institutions and massive debt distress for big industrial houses.

The question is: how does one move forward? In a recent paper titled The High Price of Efficiency, published in the Harvard Business Review, Roger Martin of the University of Toronto offers some interesting arguments. The gist of those arguments is that instead of market efficiency the focus should be on market resilience. This is because the rewards from efficiency become more and more unequal as efficiency improves, handing ever-growing market power to the most efficient competitors.
The end result is a Pareto distribution of wealth, i.e. a highly skewed and unequal distribution of the surpluses generated by economic activity.

In other words, a super-efficient dominant model elevates the risk of catastrophic failure. This is most dangerous in new economies, where competitive advantage is often tied to network effects, which give incumbents a powerful boost.

The solutions, as argued by Martin, must therefore entail policies that limit scale, introduce some friction on the path to efficiency, promote patient capital, create better jobs and teach for resilience. These prescriptions have eminent merit and are not de-linked from each other. In fact, they work only in combination with each other.

For instance, if competition policies can rein in market domination of the kind that crowds out any competition whatsoever, then many more enterprises can be created, ensuring a distributed spread of capital. For the growth of these enterprises, some kind of legitimate trade barriers would be acceptable. Such incentives can be justified as a means of promoting long-term capital, whose value would be greater than that of short-term capital. In other words, they can lead to the creation of companies with long-term strategies. Such companies are then most likely to invest in human capital for long-term productivity; they will care for workers who are also consumers of their products and have the capacity to buy them. In a nutshell, they will realise that cheap labour is actually more expensive. Last but not least, teaching for such an ethic, which Martin describes as ‘resilience’, should start at management schools, which are currently over-obsessed with ‘efficiency’ as the ultimate goal.

It is interesting to note that Martin is not alone in this thought. His basic argument resonates with the view put forth by Joseph Schumpeter, regarded as one of the greatest economists of the 20th century, who opined that dynamic capitalism was destined to fail because the very efficiency of capitalistic enterprise would lead to monopolistic structures and the disappearance of the entrepreneur. As a solution he emphasised the importance of ‘innovation’ – but innovation as a process of industrial mutation that incessantly revolutionises the economic structure from within, incessantly destroying the old one and incessantly creating a new one. He called this ‘creative destruction’.

In India and globally, the State, the Market and Democracy should come together for this ‘creative destruction’ – to save each other from themselves!

About the Authors

Pradeep S. Mehta and Abhishek Kumar work for CUTS International, a global public policy think and action tank.

Capital Flows – Is A Reckoning Nigh?

Colin Lloyd

Macro

Borrowing in euros continues to rise even as the rate of US borrowing slows.

- The BIS has identified an Expansionary Lower Bound (ELB) for interest rates
- Developed economies might not be immune to the ELB
- Demographic deflation will thwart growth for decades to come

In one of my previous articles – In a World of Rising Debt: Who Possesses the Greatest Risk? – I looked at the increase in debt globally. However, there has been another trend since 2009 which is worth investigating as we consider from whence the greatest risk to global growth may hail. The BIS global liquidity indicators at end-September 2018, released at the end of January, provide an insight:

The annual growth rate of US dollar credit to non-bank borrowers outside the United States slowed to 3%, compared with its most recent peak of 7% at end-2017. The outstanding stock stood at $11.5 trillion.

In contrast, euro-denominated credit to non-bank borrowers outside the euro area rose by 9% year on year, taking the outstanding stock to €3.2 trillion (equivalent to $3.7 trillion). Euro-denominated credit to non-bank borrowers located in emerging market and developing economies (EMDEs) grew even more strongly, up by 13%.

The chart below shows the slowing rate of US$ credit growth, while euro credit accelerates:

Source: BIS Global Liquidity Indicators

The rising demand for euro-denominated borrowing has been in train since the end of the great financial recession in 2009. Lower interest rates in the Eurozone have been a part of this process; a tendency for the Japanese yen to rise in times of economic and geopolitical concern has no doubt helped European lenders to gain market share. This trend, however, remains overshadowed by the sheer size of the US credit markets. The US$ has remained pre-eminent due to structurally higher interest rates and bond yields than Europe or Japan: investors, rather than borrowers, dictate capital flows.

The EC’s Analysis of developments in EU capital flows in the global context, from November 2018, concurs:

The euro area (excluding intra-euro area flows) has been since 2013 the world’s leading net exporter of capital. Capital from the euro area has been invested heavily abroad in debt securities, especially in the US, taking advantage of the interest differential between the two jurisdictions. At the same time, foreign holdings of euro-area bonds fell as a result of the European Central Bank’s Asset Purchase Programme.

This brings us to another issue: a country’s ability to service its debt is linked to its GDP growth rate. Since 2009 the US economy has expanded by 34%; over the same period, Europe has shrunk by 2%. Putting these rates of expansion into a global perspective, the last decade has seen China’s economy grow by 139%, whilst India has gained 96%. Recent analysis suggests that Chinese growth may have been overstated by 2% per annum over the past decade, but the pace is still far in excess of developed economy rates. Concern about Chinese debt is not unwarranted, but with GDP rising by 6% per annum, China’s economy will be 80% larger in a decade, whilst India’s, growing at 7%, will have doubled.

Another excellent research paper from the BIS – The expansionary lower bound: contractionary monetary easing and the trilemma – investigates the impact of monetary tightening in developed economies on emerging markets.
Here is part of the introduction, the emphasis is mine:

…policy makers in EMs are often reluctant to lower interest rates during an economic downturn because they fear that, by spurring capital outflows, monetary easing may end up weakening, rather than boosting, aggregate demand.

An empirical analysis of the determinants of policy rates in EMs provides suggestive evidence about the tensions faced by monetary authorities, even in countries with flexible exchange rates.

…The results reveal that, even after controlling for expected inflation and the output gap, monetary authorities in EMs tend to hike policy rates when the VIX or US policy rates increase. This is arguably driven by the desire to limit capital outflows and the depreciation of the exchange rate.

…our theory predicts the existence of an “Expansionary Lower Bound” (ELB), which is an interest rate threshold below which monetary easing becomes contractionary. The ELB constrains the ability of monetary policy to stimulate aggregate demand, placing an upper bound on the level of output achievable through monetary stimulus.

The ELB can occur at positive interest rates and is therefore a potentially tighter constraint for monetary policy than the Zero Lower Bound (ZLB). Furthermore, global monetary and financial conditions affect the ELB and thus the ability of central banks to support the economy through monetary accommodation. A tightening in global monetary and financial conditions leads to an increase in the ELB, which in turn can force domestic monetary authorities to increase policy rates, in line with the empirical evidence presented…

The BIS research is focussed on emerging economies, but aspects of the ELB are evident elsewhere. The limits of monetary policy are clearly observable in Japan: the Eurozone may be entering a similar twilight zone.

The difference between emerging and developed economies’ responses to a tightening in global monetary conditions is seen in capital flows and exchange rates. Whilst emerging market currencies tend to fall, prompting their central banks to tighten monetary conditions in defence, in developed economies the flow of returning capital from emerging market investments may actually lead to a strengthening of the exchange rate. The persistent strength of the Japanese yen, despite moribund economic growth over the past two decades, is an example of this phenomenon.

Part of the driving force behind developed market currency strength in response to a tightening of global monetary conditions is demographic: a younger working-age population borrows more, an ageing populace borrows less.

At the risk of oversimplification, lower bond yields in developing (and even developed) economies accelerate the process of capital repatriation. Japanese pensioners can hardly rely on JGBs to deliver their retirement income when yields are at the zero bound; they must accept higher risk to achieve a living income, but this makes them more likely to draw down on investments made elsewhere when uncertainty rises. A 2% rise in US interest rates only helps the eponymous Mrs Watanabe if the yen appreciates by less than 2% in times of stress (a simple sketch of this arithmetic appears at the end of this piece). Japan’s pensioners face a dilemma: a fall in US rates, in response to weaker global growth, also creates an income shortfall; capital is still repatriated, simply with less vehemence than during an emerging market crisis.
As I said, this is an oversimplification of a vastly more complex system, but the importance of capital flows, in a more polarised ‘risk-on, risk-off’ world, is not to be underestimated.

Returning to the BIS working paper, the authors conclude:

The models highlight a novel inter-temporal trade-off for monetary policy since the level of the ELB is affected by the past monetary stance. Tighter ex-ante monetary conditions tend to lower the ELB and thus create more monetary space to offset possible shocks. This observation has important normative implications since it calls for keeping a somewhat tighter monetary stance when global conditions are supportive to lower the ELB in the future.

Finally, the models have rich implications for the use of alternative policy tools that can be deployed to overcome the ELB and restore monetary transmission. In particular, the presence of the ELB calls for an active use of the central bank’s balance sheet, for example through quantitative easing and foreign exchange intervention. Furthermore, the ELB provides a new rationale for capital controls and macro-prudential policies, as they can be successfully used to relax the tensions between domestic collateral constraints and capital flows. Fiscal policy can also help to overcome the ELB, while forward guidance is ineffective since the ELB increases with the expectation of looser future monetary conditions.

Conclusions and Investment Opportunities

The concept of the ELB is new, and the focus of the BIS working paper is on its impact on emerging markets. I believe the same forces are evident in developed economies too, but the capital flows are reversed. For investors, the greatest risk of emerging market investment is posed by currency. However, each devaluation by an emerging economy inexorably weakens the position of developed economies, since the devaluation makes that country’s exports immediately more competitive.

At present the demographic forces favour repatriation during times of crisis and repatriation, at a slower rate, during times of EM currency appreciation. This is because the ageing economies of the developed world continue to draw down on their investments. At some point this demographic effect will reverse; for Japan and the Eurozone, however, this will not be before 2100. For more on the demographic deficit, the 2018 Ageing Report – Europe’s population is getting older – is worth reviewing. Until demographic trends reverse, international demand to borrow in US dollars, euros and yen will remain strong. Emerging market countries will pay the occasional price for borrowing cheaply, in the form of currency depreciations.

For Europe and Japan a reckoning may be nigh, but it seems more likely that their economic importance will gradually diminish as emerging economies, with younger working-age populations and higher structural growth rates, eclipse them.

Originally Published in In the Long Run
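As referenced above, here is a minimal sketch of the Mrs Watanabe arithmetic. The numbers are purely illustrative assumptions: the yen-terms return on an unhedged dollar holding is roughly the dollar yield less any yen appreciation over the holding period.

```python
# Illustrative only: approximate one-year JPY-terms return on an unhedged USD deposit.
# Exact return compounds: (1 + usd_yield) * (1 - yen_appreciation) - 1.

def jpy_return(usd_yield: float, yen_appreciation: float) -> float:
    """Return, in yen terms, of holding a USD asset for one year unhedged."""
    return (1 + usd_yield) * (1 - yen_appreciation) - 1

# Assumed scenarios for a Japanese saver holding US assets:
scenarios = {
    "US yield +2%, yen flat":              (0.02, 0.00),
    "US yield +2%, yen up 2%":             (0.02, 0.02),
    "US yield +2%, yen up 5% (risk-off)":  (0.02, 0.05),
}

for name, (y, fx) in scenarios.items():
    print(f"{name}: {jpy_return(y, fx):+.2%} in yen terms")
# The 2% yield pick-up is wiped out once the yen appreciates by roughly 2% or more.
```

In a risk-off episode the yen's habitual appreciation can swamp the extra yield, which is why the repatriation impulse dominates precisely when global conditions tighten.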

Central Bank Balance Sheet Reductions – Will Anyone Follow the Fed?

Colin Lloyd

Macro

The next wave of QE will be different – credit spreads will be controlled.

- The Federal Reserve may continue to tighten but few other central banks can follow
- ECB balance sheet reduction might occur if a crisis does not arrive first
- Interest rates are likely to remain structurally lower than before 2008

The Federal Reserve’s response to the great financial recession of 2008/2009 was swift by comparison with that of the ECB; the BoJ was reticent, too, due to its already extended balance sheet. Now that the other developed economy central banks have fallen into line, the question which dominates markets is: will any of them have room to reverse QE?

Last month saw the publication of a working paper from the BIS – Risk endogeneity at the lender/investor-of-last-resort – in which the authors investigate the effect of ECB liquidity provision during the euro crisis of 2010/2012. They also speculate about the challenge balance sheet reduction poses to systemic risk. Here is an extract from the non-technical summary (the emphasis is mine):

The Eurosystem’s actions as a large-scale lender- and investor-of-last-resort during the euro area sovereign debt crisis had a first-order impact on the size, composition, and, ultimately, the credit riskiness of its balance sheet. At the time, its policies raised concerns about the central bank taking excessive risks. Particular concern emerged about the materialization of credit risk and its effect on the central bank’s reputation, credibility, independence, and ultimately its ability to steer inflation towards its target of close to but below 2% over the medium term. Against this background, we ask: Can central bank liquidity provision or asset purchases during a liquidity crisis reduce risk in net terms? This could happen if risk taking in one part of the balance sheet (e.g., more asset purchases) de-risks other balance sheet positions (e.g., the collateralized lending portfolio) by a commensurate or even larger amount. How economically important can such risk spillovers be across policy operations? Were the Eurosystem’s financial buffers at all times sufficiently high to match its portfolio tail risks? Finally, did past operations differ in terms of impact per unit of risk?…

We focus on three main findings. First, we find that lender-of-last-resort (LOLR)- and investor-of-last-resort (IOLR)-implied credit risks are usually negatively related in our sample. Taking risk in one part of the central bank’s balance sheet (e.g., the announcement of asset purchases within the Securities Markets Programme – SMP) tended to de-risk other positions (e.g., collateralized lending from previous longer-term refinancing operations – LTROs). Vice versa, the allotment of two large-scale very long-term refinancing operation (VLTRO) credit operations each decreased the one-year-ahead expected shortfall of the SMP asset portfolio. This negative relationship implies that central bank risks can be nonlinear in exposures. In bad times, increasing size increases risk less than proportionally. Conversely, reducing balance sheet size may not reduce total risk by as much as one would expect by linear scaling. Arguably, the documented risk spillovers call for a measured approach towards reducing balance sheet size after a financial crisis. Second, some unconventional policy operations did not add risk to the Eurosystem’s balance sheet in net terms.
For example, we find that the initial OMT announcement de-risked the Eurosystem’s balance sheet by €41.4bn in 99% expected shortfall (ES). As another example, we estimate that the allotment of the first VLTRO increased the overall 99% ES, but only marginally so, by €0.8bn. Total expected loss decreased, by €1.4bn. We conclude that, in extreme situations, a central bank can de-risk its balance sheet by doing more, in line with Bagehot’s well-known assertion that occasionally “only the brave plan is the safe plan.” Such risk reductions are not guaranteed, however, and counterexamples exist when risk reductions did not occur. Third, our risk estimates allow us to study past unconventional monetary policies in terms of their ex-post ‘risk efficiency’. Risk efficiency is the notion that a certain amount of expected policy impact should be achieved with a minimum level of additional balance sheet risk. We find that the ECB’s Outright Monetary Transactions (OMT) program was particularly risk-efficient ex-post, since its announcement shifted long-term inflation expectations from deflationary tendencies toward the ECB’s target of close to but below two percent and decreased sovereign benchmark bond yields for stressed euro area countries, while lowering the risk inherent in the central bank’s balance sheet. The first allotment of VLTRO funds appears to have been somewhat more risk-efficient than the second allotment. The SMP, despite its benefits documented elsewhere, does not appear to have been a particularly risk-efficient policy measure.

This BIS research is an important assessment of the effectiveness of ECB QE. Among other things, the authors find that the ‘shock and awe’ effectiveness of the first ‘quantitative treatment’ soon diminished. Liquidity is the methadone of the market; for QE to work in future, a larger and more targeted dose of monetary alchemy will be required.

The paper provides several interesting findings. For example, the Federal Reserve ‘taper tantrum’ of 2013 and the Swiss National Bank’s decision to unpeg the Swiss franc in 2015 did not appear to influence markets inside the Eurozone once ECB president Mario Draghi had made his intentions plain. Nonetheless, the BIS conclude that (emphasis, once again, is mine):

…collateralized credit operations imply substantially less credit risk (by at least one order of magnitude in our crisis sample) than outright sovereign bond holdings per €1bn of liquidity, owing to a double recourse in the collateralized lending case. Implementing monetary policy via credit operations rather than asset holdings, whenever possible, therefore appears preferable from a risk efficiency perspective. Second, expanding the set of eligible assets during a liquidity crisis could help mitigate the procyclicality inherent in some central banks’ risk protection frameworks.

In other words, rather than exacerbate the widening of credit spreads by purchasing sovereign debt, it is preferable for central banks to lean against the ‘flight to quality’ tendency of market participants during times of stress.

The authors go on to look at recent literature on the stress-testing of central bank balance sheets, mainly focussing on analysis of the US Federal Reserve. They then review ‘market-risk’ methods as a solution to the ‘credit-risk’ problem, employing non-Gaussian methods – a prescient approach after the unforeseen events of 2008.
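The BIS paper leans heavily on 99% expected shortfall (ES) as its risk yardstick. For readers unfamiliar with the measure, here is a minimal sketch with invented numbers of what ES captures: the average loss in the worst 1% of outcomes, as opposed to Value-at-Risk (VaR), which is merely the threshold of that tail.

```python
# Illustrative only: 99% Value-at-Risk vs 99% expected shortfall on simulated losses.
# The loss distribution below is invented for the example; it is not ECB data.
import numpy as np

rng = np.random.default_rng(42)
# Fat-tailed loss distribution (in EUR bn) to mimic credit-risk style outcomes.
losses = rng.standard_t(df=3, size=100_000) * 5 + 2

var_99 = np.quantile(losses, 0.99)          # loss exceeded in only 1% of scenarios
es_99 = losses[losses >= var_99].mean()     # average loss within that worst 1%

print(f"99% VaR: EUR {var_99:.1f}bn")
print(f"99% ES : EUR {es_99:.1f}bn  (always >= VaR, since it averages the tail)")
```

Because ES averages the tail rather than marking its edge, it is the more natural measure for a balance sheet whose danger lies in rare but extreme credit events.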
Bagehot thou shouldst be living at this hour (with apologies to Wordsworth)

The BIS authors refer on several occasions to Bagehot. I wonder what he would make of the current state of central banking? Please indulge me in this aside.

Walter Bagehot (1826 to 1877) edited the Economist from 1861 until his death. He is also the author of perhaps the best-known book on the functioning of the 19th century money markets, Lombard Street (published in 1873). He is famed for the dictum that a central bank should ‘lend freely, at a penalty rate, against good collateral.’ In fact he never actually uttered these words; they have been inferred from his writing. Even the concept of a ‘lender of last resort’, to which he refers, was not coined by him; it was first described by Henry Thornton in his 1802 treatise – An Enquiry into the Nature and Effects of the Paper Credit of Great Britain.

To understand what Bagehot was really saying in Lombard Street, this essay by Peter Conti-Brown – Misreading Walter Bagehot: What Lombard Street Really Means for Central Banking – provides an elegant insight:

Lombard Street was not his effort to argue what the Bank of England should do during liquidity crises, as almost all people assume; it was an argument about what the Bank of England should openly acknowledge that it had already done.

Bagehot was a classical liberal and an advocate of the gold standard; I doubt he would approve of the nature of central banks today. He would, I believe, have thrown his lot in with the likes of George Selgin and other proponents of Free Banking.

Conclusion and Investment Opportunities

Given the weakness of European economies, it seems unlikely that the ECB will be able to follow the lead of the Federal Reserve and raise interest rates in any meaningful way. The unwinding of at least a portion of QE might be easier, since many of these refinancing operations will naturally mature. For arguments both for and against central bank balance sheet reduction, this paper by Charles Goodhart – A Central Bank’s optimal balance sheet size? – is well worth reviewing. A picture, however, is worth a thousand words, although I think the expected balance sheet reduction may be overly optimistic.

Come the next crisis, I expect the ECB to broaden the range of eligible securities and instruments that it is prepared to purchase. The ‘Draghi Put’ will gain greater credence as it encompasses a wider array of credits. The ‘flight to quality’ effect, driven by swathes of investors forsaking equities and corporate bonds in favour of ‘risk-free’ government securities, will be shorter-lived and less extreme. The ‘convergence trade’ between the yields of European government bonds will regain pre-eminence; I can conceive of the 10yr BTP/Bund spread testing zero.

None of this race to zero will happen in a straight line, but it is important not to lose sight of the combined power of qualitative and quantitative easing. The eventual ‘socialisation’ of common stock is already taking place in Japan. Make no mistake, it is already being contemplated by a central bank near you, right now.

Originally Published in In the Long Run

When Government Interest Rates Go Lower than GDP Growth Rate

Colin Lloyd

Macro

Sustainable government debt – an old idea refreshed.

- New research from the Peterson Institute suggests bond yields may fall once more
- Demographic forces and unfunded state liabilities point to an inevitable reckoning
- The next financial crisis may be assuaged with a mix of fiscal expansion plus QQE
- Pension fund return expectations for bonds and stocks need to be revised lower

The Peterson Institute has long been one of my favourite sources of original research in the field of economics. They generally support free-market ideas, although they are less than classically liberal in their approach. I was, nonetheless, surprised by the Presidential Lecture given at the annual gathering of the American Economic Association (AEA) by Olivier Blanchard, ex-IMF Chief Economist, now at the Peterson Institute – Public Debt and Low Interest Rates. The title is quite anodyne; the content may come to be regarded as incendiary. Here is part of his introduction:

Since 1980, interest rates on US government bonds have steadily decreased. They are now lower than the nominal growth rate, and according to current forecasts, this is expected to remain the case for the foreseeable future. 10-year US nominal rates hover around 3%, while forecasts of nominal growth are around 4% (2% real growth, 2% inflation). The inequality holds even more strongly in the other major advanced economies: The 10-year UK nominal rate is 1.3%, compared to forecasts of 10-year nominal growth around 3.6% (1.6% real, 2% inflation). The 10-year euro nominal rate is 1.2%, compared to forecasts of 10-year nominal growth around 3.2% (1.5% real, 2% inflation). The 10-year Japanese nominal rate is 0.1%, compared to forecasts of 10-year nominal growth around 1.4% (1.0% real, 0.4% inflation).

The question this paper asks is what the implications of such low rates should be for government debt policy. It is an important question for at least two reasons. From a policy viewpoint, whether or not countries should reduce their debt, and by how much, is a central policy issue. From a theory viewpoint, one of the pillars of macroeconomics is the assumption that people, firms, and governments are subject to intertemporal budget constraints. If the interest rate paid by the government is less than the growth rate, then the intertemporal budget constraint facing the government no longer binds. What the government can and should do in this case is definitely worth exploring.

The paper reaches strong, and, I expect, surprising, conclusions. Put (too) simply, the signal sent by low rates is that not only debt may not have a substantial fiscal cost, but also that it may have limited welfare costs.

Blanchard’s conclusions may appear radical, yet, in my title, I refer to this as an old idea. Allow me to explain. In business it makes sense, all else being equal, to borrow if the rate of interest paid on your loan is lower than the return on your project. At the national level, if the government can borrow at below the rate of GDP growth, its debt should be sustainable, since, over time (assuming, of course, that it is not added to), the ratio of debt to GDP will naturally diminish – the short simulation below illustrates the point.

There are plenty of reasons why such borrowing may have limitations, but what really interests me, in this thought-provoking lecture, is the reason governments can borrow at such low rates in the first instance.
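As a minimal sketch of the point above, with assumed, illustrative numbers: if the primary budget is otherwise balanced and debt is simply rolled over, the debt-to-GDP ratio evolves roughly as d(t+1) = d(t) x (1 + r) / (1 + g), so it shrinks whenever the interest rate r sits below the nominal growth rate g.

```python
# Illustrative only: debt-to-GDP dynamics when the government rolls over its debt
# and runs a balanced primary budget. Ratio evolves as d_next = d * (1 + r) / (1 + g).

def debt_ratio_path(d0: float, r: float, g: float, years: int) -> list[float]:
    """Debt/GDP ratio over time, given borrowing cost r and nominal GDP growth g."""
    path = [d0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g))
    return path

# Assumed numbers, loosely echoing Blanchard's US figures: ~3% borrowing cost, ~4% nominal growth.
path = debt_ratio_path(d0=1.05, r=0.03, g=0.04, years=30)

print(f"Starting debt/GDP: {path[0]:.0%}")
print(f"After 30 years   : {path[-1]:.0%}  (no repayment needed; the ratio simply erodes)")
# Reverse the inequality (r > g) and the same arithmetic compounds the ratio upwards instead.
```

The sign of r minus g, not the absolute level of debt, is what makes the intertemporal constraint bind or slacken, which is precisely why Blanchard's observation matters.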
One argument is that as GDP grows, so does the size of the tax base; in other words, future taxation should be capable of covering the ongoing interest on today’s government borrowing, and the market should do the rest. Put another way, if a government becomes overly profligate, yields will rise. If borrowing costs exceed the expected rate of GDP growth, there may be a panicked liquidation by investors. A government’s ability to borrow will be severely curtailed in this scenario – hence the healthy obsession of many finance ministers with debt-to-GDP ratios.

There are three factors which distort the cosy relationship between the lower yield of ‘risk-free’ government bonds and the higher rates of GDP growth seen in most developed countries: investment regulations, unfunded liabilities and fractional reserve bank lending.

Let us begin with investment regulations, specifically the constraints imposed on pension funds and insurance companies. These institutions are hampered by prudential measures intended to guarantee that they are capable of meeting payment obligations to their customers in a timely manner. Mandated investment in liquid assets is a key construct: government bonds form a large percentage of their investments. As if this were not sufficient incentive, institutions are also encouraged to purchase government bonds as a result of the zero capital requirements for holding these assets under Basel rules.

A second factor is the uncounted, unfunded liabilities of state pension funds and public healthcare spending. I refer to John Mauldin on this subject. The eighth instalment of his Train-Wreck series is entitled Unfunded Promises. The author begins his calculation of total US debt with the face amount of all outstanding Treasury paper; at $21.2trn it amounts to approximately 105% of GDP. This is where the calculations become disturbing:

If you add in state and local debt, that adds another $3.1 trillion to bring total government debt in the US to $24.3 trillion or more than 120% of GDP.

Mauldin goes on to suggest that this still underestimates the true cost. He turns to the Congressional Budget Office 2018 Long-Term Budget Outlook, which assumes that federal spending will grow significantly faster than federal revenue. On the basis of its assumptions, all federal tax revenues will be consumed by social security, healthcare and interest expenditures by 2041.

Extrapolate this logic to other developed economies, especially those with more generous welfare commitments than the US, and the outlook for rapidly ageing, welfare-addicted developed countries is bleak. In a 2017 white paper by Mercer for the World Economic Forum – We Will Live to 100 – the author estimates that the unfunded liabilities of the US, UK, Netherlands, Japan, Australia, Canada, China and India will rise from $70trln in 2015 to $400trln in 2050. These countries represent roughly 60% of global GDP. I extrapolate global unfunded liabilities of around $120trln today rising to nearer $650trln within 20 years:

Source: Mercer Analysis

For an in-depth analysis of the global pension crisis, this 2016 research paper from Citi GPS – The Coming Pensions Crisis – is a mine of information.

In case you are still wondering how, on earth, we got here:
This chart from Money Week shows how a combination of increased fiscal spending (to offset the effect of the bursting of the tech bubble in 2000) and the dramatic fall in interest rates (since the great financial recession of 2008/2009) has damaged the US state pension system:

Source: MoneyWeek

The yield on US Treasury bonds has remained structurally higher than most European bonds and all Japanese bonds for at least a decade.

The third factor is the fractional reserve banking system. Banks serve a useful purpose in intermediating between borrowers and lenders. They are the levers of the credit cycle, but their very existence is testament to their usefulness to their governments, by whom they are esteemed for their ability to purchase government debt. I discuss this in A history of Fractional Reserve Banking – or why interest rates are the most important influence on stock market valuations?, a two-part essay I wrote for the Cobden Centre in October 2016. In it I suggest that the UK banking system, led by the Bank of England, has enabled the UK government to borrow at around 3% below the ‘natural rate’ of interest for more than 300 years. The recent introduction of quantitative easing has only exaggerated the artificial suppression of government borrowing costs.

Before you conclude that I am on a mission to change the world financial system, I wish to point out that if this suppression of borrowing costs has persisted for more than 300 years, there is no reason why it should not continue.

Which brings us back to Blanchard’s lecture at the AEA. Given the magnitude of unfunded liabilities, the low yield on government bonds is, perhaps, even more remarkable. More alarmingly, it reinforces Blanchard’s observation about the greater scope for government borrowing, although the author is at pains to advocate fiscal rectitude. If economic growth in developed economies stalls, as it has for much of the past two decades in Japan, then a Japanese redux will occur in other developed countries. The ‘risk-free’ rate across all developed countries will gravitate towards the zero bound, with a commensurate flattening in yield curves. Over the medium term (the next decade or two) an increasing burden of government debt can probably be managed. Some of the new borrowing may even be diverted to investments which support higher economic growth. The end-game, however, will be a monumental reckoning, involving wholesale debt forgiveness. The challenge, as always, will be to anticipate the inflection point.

Conclusion and Investment Opportunities

Since the early 1990s analysts have been predicting the end of the bond bull market. Until quite recently it was assumed that negative government bond yields were a temporary aberration reflecting stressed market conditions. When German Schuldscheine (the promissory notes of the German banking system) traded briefly below the yield of German Bunds during reunification in 1989, the ‘liquidity anomaly’ was soon rectified. There has since been a sea-change: for a decade since 2008, US 30yr interest rate swaps have traded at a yield discount to US Treasuries – for more on this subject please see Macro Letter – No 74 – 07-04-2017 – US 30yr Swaps have yielded less than Treasuries since 2008 – does it matter?

With the collapse in interest rates and bond yields, the unfunded liabilities of governments in developed economies have ballooned. A solution to the ‘pension crisis’, higher bond yields, would sow the seeds of a wider economic crisis.
Whilst governments still control their fiat currencies and their central banks dictate the rate of interest, there is still time – though, I suspect, not the political will – to make the gradual adjustments necessary to right the ship.   I have been waiting for US 10yr yields to reach 4.5%; I may be disappointed. For investors in fixed income securities, the bond bull market has yet to run its course. Negative inflation-adjusted returns will become the norm for risk-free assets. Stock markets may be range-bound for a protracted period as return expectations adjust to a structurally weaker economic growth environment.   Originally Published in In the Long Run
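As a footnote to the r-versus-g argument in the piece above, here is a minimal Python sketch of the standard debt-dynamics identity, in which the debt-to-GDP ratio evolves roughly as b' = b × (1 + r) / (1 + g) − s, with s the primary surplus as a share of GDP. The interest rate, growth rate, surplus and starting debt figures below are illustrative assumptions of ours, not numbers from Blanchard’s lecture or Mauldin’s series; the point is only the direction of travel when borrowing costs sit below or above nominal growth.

# Sketch of government debt dynamics: b' = b * (1 + r) / (1 + g) - s,
# where b is debt/GDP, r the nominal borrowing cost, g nominal GDP growth
# and s the primary surplus as a share of GDP. All inputs are illustrative.

def debt_path(b0, r, g, s, years):
    """Return the debt-to-GDP ratio at the end of each year."""
    path, b = [], b0
    for _ in range(years):
        b = b * (1 + r) / (1 + g) - s
        path.append(b)
    return path

if __name__ == "__main__":
    # Case 1: borrowing cost below nominal growth (the benign r < g world).
    low_r = debt_path(b0=1.05, r=0.02, g=0.04, s=0.0, years=30)
    # Case 2: borrowing cost above nominal growth (the 'panicked liquidation' risk).
    high_r = debt_path(b0=1.05, r=0.05, g=0.03, s=0.0, years=30)

    print(f"Debt/GDP after 30 years, r < g: {low_r[-1]:.0%}")   # drifts down toward ~59%
    print(f"Debt/GDP after 30 years, r > g: {high_r[-1]:.0%}")  # compounds up toward ~187%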

Calorie Counter: Why "Calories In, Calories Out" Hypothesis May Not Be Telling You the Whole Story

Dr Arun K Chopra

Ship Shape

We discussed here last week some limitations of the “Calories In, Calories Out” hypothesis, suggesting that assiduously counting one's calories does not help in fat loss or weight maintenance.   Are we really looking at the “Death of the Calorie”, as suggested by a recent cover story of The Economist's 1843 Magazine?   Do calories matter at all? Of course, they do! But not in the way most people think.   Calories In   Let's look at why calories are thought to be critical for weight loss/gain in the first place. If we were to give a group of people food prepared with a pre-decided calorie and nutrient content (ratio of carbs:fat:proteins), and control everything else (i.e., work done, calories spent, etc.) without allowing for any extra food, we would be unlikely to observe much difference in the weight lost or gained in the short term (within a few days or weeks).   However, there is now sufficient data available that proves that different foods are handled by the body differently. While carbohydrates are quickly digested (within 2-3 hours, with fewer calories burnt), proteins and fats take much longer (about 8-12 hours, with many more calories burnt in the process).   This implies that the net calories gained from food differ from a simple addition of all calories consumed.   Further, the gut responds differently to different foods. The hormones that signal satiety (a sense of fullness after a meal) are secreted most profusely in response to fat consumption. Therefore, one feels fuller earlier and for a longer duration while consuming a fatty meal vs. a carbohydrate-based meal.   There is data available which proves that the net calorie intake in a day is lower if the fat content is higher than the recommended 30% of all calories (perhaps closer to 40-50%). And that's how calories indirectly kick in - if we consume more fat and protein compared to carbs, we eat less (fewer total calories in a day) and crave food less than if we consume >50-55% of calories from carbs (which is the norm in most parts of the world today).   Over the long term, a difference of 100-300 calories per day in food intake, and the extra work done in digesting fats and proteins, is responsible for the greater weight loss and fat loss that's seen with low-carb diets.   Also, the real world is not like a controlled study. Here, there are meetings, deadlines, sleepless nights, travel, parties, and much more, which makes it much harder to ensure portion control. The easy digestion of carbs encourages frequent snacking, often leading one to consume unnecessary calories. Carbohydrates are also known to activate brain reward mechanisms that produce the pleasurable experience associated with high-calorie, especially high-sugar, foods - an experience one craves repeatedly, which is why people get addicted to desserts and colas.   Is it not better, then, to eat foods that provide early satiety - foods we can eat to fullness without feeling hungry or cranky all the time - rather than eating a measured portion of food that leaves one dissatisfied?   Calories Out   The other side of the equation is exercise. Undoubtedly one of our healthiest activities, exercise goes a long way in maintaining health and fitness. It also helps in weight maintenance, when accompanied by a good diet. However counter-intuitive it may seem, in the absence of a healthy diet, exercise alone does not help in weight loss, especially as one ages.
A study published in 2006 showed that even regular runners would gain weight year on year unless they increased their weekly running distance every year - by about 3 km for men and nearly 5 km for women.   This suggests that people running in their 20s would need to run marathons every week in their 50s in order to avoid gaining weight - a very impractical idea. The only exception could be elite athletes, who can burn over 1000 calories in their workouts.   Another problem is the apparent lack of energy or disinclination to exercise that many people face after a few weeks of diet control. Again, the carbs in our diet make everything else (fats and proteins) unusable for providing energy - carbohydrates get digested first while all the other calories get stored as body fat. This means that one has to either eat still more to find the energy to exercise, or eat less and feel lethargic all the time. Also, as one grows heavier by eating more, the appetite increases further, and a vicious cycle ensues.   Thus, our appetite appears to be a function of what we eat, as is our ability to exercise.   Conclusion: A healthy diet is better than calculating CICO and getting frustrated.   In summary, calories do matter, but different foods are metabolized differently by the body, and thus have variable effects. If you are watching your calories, please watch your food choices first. Moving towards healthy foods may avoid the need for any rigorous portion control or calorie counting, which is nearly impossible to achieve in any case. Eating healthy also improves our exercise capacity, making the circle complete.   This is a recurring column published every Sunday. Click here to view my other articles on health, nutrition and exercise.
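To see how the “marathons every week” extrapolation in the piece above works, here is a back-of-the-envelope Python sketch. The starting age and starting weekly mileage are purely illustrative assumptions of ours; only the roughly 3 km annual increment to weekly distance comes from the study quoted above.

# Back-of-the-envelope check of the running extrapolation above.
# Assumption (illustrative): a 25-year-old man running 20 km per week.
# From the cited study: weekly mileage must rise by ~3 km every year
# just to keep weight stable.

MARATHON_KM = 42.2

start_age, start_weekly_km, yearly_increment_km = 25, 20.0, 3.0

for age in (35, 45, 55):
    weekly_km = start_weekly_km + yearly_increment_km * (age - start_age)
    print(f"Age {age}: ~{weekly_km:.0f} km/week "
          f"(~{weekly_km / MARATHON_KM:.1f} marathons every week)")

# By age 55 this reaches ~110 km/week, i.e. roughly 2.6 marathons a week --
# which is why the article calls the idea impractical.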

Count Your Calories: Is "Calories In, Calories Out" A Weight Loss Tip or a Myth

Dr Arun K Chopra

Ship Shape

The American doctor and author LH Peters, in what was perhaps the first-ever book on nutrition, Diet & Health: With Key to the Calories, proclaimed:   “You may eat what you like… but count your calories!”   Published in 1918, and the first of its kind to sell over a million copies, Diet and Health led to a mathematical understanding of obesity - the so-called CICO (Calories In, Calories Out) hypothesis.   A relatively simple concept which is easily understood by most:   One gets fat if one eats more and/or exercises less than is needed according to one’s height and age.   In order to lose weight, all one needs to do is reverse this equation, i.e. exercise more and eat less. This seemingly simple process has been followed, and is still sworn by, millions of individuals worldwide in a bid to lose fat and become fit. However...mostly in vain.   Studies over the last century have shown consistent results. Most serious dieters lose 5-10 kg in the first 3-6 months, after which their progress plateaus, followed by a slow regain of the lost weight (roughly 90% of dieters regain at least half of their lost weight within 2 years; many of them even more). As most succumb to inertia, leading to lethargy and overconsumption, they remain overweight/obese. It would be worthwhile to ask why the efforts of millions of humans and countless scientists have failed to crack this seemingly straightforward task.   As Sherlock Holmes said to his friend Dr. Watson, “…when you have excluded the impossible, whatever remains, however improbable, must be the truth.” This might leave us with an uncomfortable truth - there must be something wrong with the CICO hypothesis.   A Calorie (or kilocalorie, kcal - equal to 1000 calories, cal) is a unit of heat, defined as the amount of energy needed to raise the temperature of 1 liter of water by 1°C. Wilbur Atwater calculated the energy content of foods in the late 19th century, and reached the conclusion that the energy content of carbohydrates and proteins is approximately 4 kcal/g, while that of fat is 9 kcal/g. The average American adult was thought to need about 2000 calories a day for females and 2500 for males. The calories needed to be burnt to lose 1 lb of weight were estimated at around 3500 (~7700 for 1 kg). This lent itself to an easy interpretation - eat 500 calories less per day, remain active, and one should lose 1 lb per week (or 1 kg every fortnight).   This should be easily achievable by any dedicated dieter, and obesity should be rare. The facts, however, are mind-boggling. Obesity has multiplied over 3-fold in the last 40 years, diabetes 4-fold, and heart disease has become mankind’s biggest scourge in this period. So, what are we missing?   What Are the Problems with Calculating Calories Consumed?   The calorie concept does not differentiate between nutrients themselves, assuming that the calories from sugar, butter and meat evoke the same response once digested. However, we now know that while carbs are easily digested within 2-3 hours, proteins take over 6-8 hours, and fats even longer (8-12 hours). This means that not only does one remain full for longer after a protein- and/or fat-based meal, but one also burns more calories in digesting it. This shows that eating and digesting are more complex than the simple math given above.   The calorie concept also does not differentiate between the sources of nutrients, assuming all to have similar effects when eaten.
For instance, all carbs are treated similarly, whether simple sugars, starchy vegetables like potatoes, or fiber-rich salads. While sugars are digested easily and lead to an early peaking of blood sugar levels, potatoes are metabolized much more slowly, even though ultimately even they get converted to simple glucose molecules. And fiber is indigestible, so it has practically no calories (the few calories that high-fiber foods release are used up in digesting them).   It also does not account for the effects of cooking upon foods. Chopping and cutting food items simplifies the work of digestion, as does cooking, by breaking down food into an easily digestible form, effectively making the available calorie content higher. Simply cooling a food after cooking and reheating it lowers its calorie content somewhat, making it very difficult to count the exact amount consumed.   What Are the Problems with Calculating Calories Burnt?   The other side of the CICO hypothesis is fraught with similar complications. Excluding young athletes, one cannot exercise enough to lose weight consistently or even keep it off, if one is not watching one’s diet. For instance, it would take an average adult 3-4 hours of brisk walking to burn the calories contained in a scoop of ice cream, a piece of chocolate, or the friendly samosa.   For a normal healthy adult who is not into heavy running or weightlifting, most energy is consumed in unconscious, non-exercise-related activities: digesting food, the regular working of the organs (especially the brain), and minor home and work-related activities.   The calories used in different exercises depend upon a lot of factors - age, body weight, current level of fitness and ambient temperature, among others. It is notoriously difficult to estimate the calories burnt in any activity, and the digital monitors on treadmills don’t really help.   Metabolism acts differently in different people, which is why some people rarely gain fat, while others struggle to lose it despite eating much less. This points to genetic factors (some of which are presently under research) which determine how many calories are taken up and how many are passed out undigested.   Hence even if CICO is a valid hypothesis, it is a rather complex calculation with many imponderables. If one were to eat the equivalent of just 20 calories extra per day, one would gain about 1 kg every year, everything else remaining the same. Also, one rarely sees the predicted results within the expected time. Eating 500 calories per day less than usual for one month doesn’t bring about a weight loss of 2 kg - barely 1 kg or so. And most people feel tired, often listless, lacking in energy and the drive to exercise, and dream about splurging on their favorite dessert, which they do, sooner or later. And even that lasts only a few months; soon there is a plateau followed by a slow regain.   So, do calories not matter at all? Of course, they do. But not in the way people think!   More on this next week.   This is a recurring column published every Sunday. Click here to view my other articles on health, nutrition and exercise.
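As a quick sanity check on the arithmetic quoted in the piece above (Atwater’s ~3500 calories per pound, the 500-calorie daily deficit, and the 20-extra-calories-a-day example), here is a minimal Python sketch. It only reproduces the simple bookkeeping the CICO hypothesis assumes; as the article argues, real metabolism does not behave this neatly.

# The simple CICO bookkeeping the article describes -- and then questions.
CAL_PER_LB = 3500      # estimated calories per pound of body weight
CAL_PER_KG = 7700      # ~3500 * 2.2, calories per kilogram

# A 500-calorie daily deficit "should" cost one pound a week...
weekly_deficit = 500 * 7
print(f"500 cal/day deficit -> {weekly_deficit / CAL_PER_LB:.1f} lb per week")

# ...and a month of it "should" shed roughly 2 kg.
monthly_deficit = 500 * 30
print(f"One month at 500 cal/day -> {monthly_deficit / CAL_PER_KG:.1f} kg")

# Conversely, a mere 20 extra calories a day adds up to roughly 1 kg a year.
yearly_surplus = 20 * 365
print(f"20 cal/day surplus -> {yearly_surplus / CAL_PER_KG:.2f} kg per year")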

How Much Choice Should Consumers Have?

INSEAD Knowledge

Consumer Behaviour

By Amitava Chattopadhyay, INSEAD Professor of Marketing.   The retail sector bends over backwards to give the consumer an inordinate number of product choices, and yet, does the consumer want so much variety?   Variety packs are everywhere, whether they’re multi-coloured sock packs, multi-flavour yoghurts or multi-packs of chocolate bars. Retailers believe that bundling different items together is answering consumer demand, but the reality is something else. In fact, they would do better to offer more of the same in bundled packs. We all have preferences, whether it is in terms of our favourite colour, flavour or song, and when we find ourselves in the supermarket aisles, buying in bulk, these preferences make themselves no less felt.   However, as our research shows, a lot of it has to do with how many choices we are actually making when we stand in front of the supermarket shelf. In my paper “The Offer Framing Effect: Choosing Single versus Bundled Offerings Affects Variety Seeking”, co-authored with Mauricio Mittelman, Eduardo B. Andrade and C. Miguel Brendl, we show that when there is only one choice act to make, participants are systematically less interested in seeking variety in their product choices.   This is the case when we are looking at, for example, a six-pack of soft drinks — in order to purchase six cans, we only have to make one choice if they are packaged together — compared to buying six cans individually, where we are making a choice six times over. The implication for retailers is that you don’t have to indulge in price promotion on multi-packs, because by offering consumers more of the same, they will get what they want and be happy to pay the price.   Variety is the Spice of Life   As individuals, when we have the option of making more than one choice, we tend to seek variety.   In our first experiment, participants were presented with cans of Coca-Cola and Sprite and asked to select two. Of those who were asked to make two separate decisions – choosing one can each time – 62% chose two different drinks. However, when participants were asked to make one decision and choose between two packaged drinks (either two Cokes, two Sprites or a can of each), only 32% chose the mixed option.   What was most interesting in our research was that, among the participants who were given two choices, 12 out of 41 who had expressed a strong preference for either Coca-Cola or Sprite made a different selection in their second choice - while none of the participants who had a strong preference and were asked to choose from the two-pack selections chose the mixed option.   In a further experiment to determine how strongly the wish to make a different choice made itself felt, we went as far as to pit the amount of variety possible in the choice process against the amount of variety the participants would end up with in their set of chosen items. Here, there was once again a comparison between having one choice to make or two. One set of students was asked to choose between a high-variety candy selection and a low-variety candy selection, where only one decision was needed. A second group of students was asked to choose six candies in two stages. In the first stage, they were given a high-variety set consisting of one cherry, one grape and one apricot candy, and then in the second stage they had to choose between a high-variety bundle of cherry, grape and apricot candies (the same as they already had) or a low-variety bundle of three cherry candies.
As expected, participants avoided choosing the same thing they already had and preferred to choose something different, even though ultimately it meant they had less variety in the candies they owned at the end. In the first, bundled option where only one decision was made, 66% of participants ended up with the high-variety offering, whereas in the two-stage decision process, only 36% of participants ended up with the high-variety option.   This strong desire to feel that a different choice has been made during the choice process is good news for smaller stores and for retailers hoping to introduce new products to the market. Customers who tend to shop in smaller stores are generally buying in smaller quantities and can more easily be captured, as they will be more inclined to seek variety.   24/7   A further implication is for online retailers, as our findings hint that more sales of the same item are likely to be made online compared to shopping in-store. This is because when purchases are made online, only one choice is needed per item — that of typing the number of required items in the designated spot on the screen. This simplifies the purchase process compared to an in-store experience, where multiple choices are made when purchasing single offerings. For bricks-and-mortar retailers, the emphasis should be on offering single serves, whereas online retailers should expect to sell more bundled items.   So, whether we’re looking at the question of how products are packaged from a retailer’s point of view or from the consumer’s, one thing is clear: choices will always be there. The important understanding for retailers is that they have the ability to bundle or separate items depending on what they want to push — and without the need to discount, as originally thought.   For consumers: if you’re buying in large quantities, you can take comfort from knowing you’d do better to buy more of the same in a bundled pack, with the added benefit that those pineapple-flavoured yoghurts in the multipack you weren’t too keen on anyway will not go off in your fridge. And for consumers who tend to buy in smaller quantities, be reassured that our desire to seek variety is often stronger than our logic telling us what our preferences are. Do we want so much variety in our shopping experience? The answer is yes and no.   Amitava Chattopadhyay is The GlaxoSmithKline Chaired Professor in Corporate Innovation at INSEAD. He is also co-author of The New Emerging Market Multinationals: Four Strategies for Disrupting Markets and Building Brands. You can follow him on Twitter @AmitavaChats.   Follow INSEAD Knowledge on Twitter and Facebook   This article is republished courtesy of INSEAD Knowledge. Copyright INSEAD 2018.

Can A Social Beverage Save Your Life: The Good, The Bad, and The Ugly Side of Alcohol

Dr Arun K Chopra

Ship Shape

Alcohol is commonly perceived as a "social beverage" - a friendly drink to lighten up your mood, relieve tension, induce sound sleep, and to top it all...protect you from heart attacks! Could you have asked for more?   Alcohol has been part of our culture for nearly 5000 years (described as Sura in the Vedic period). According to the WHO Global Status Report on Alcohol and Health 2018, nearly a third of the world's population aged 15 years or more - a mind-boggling 2.35 billion people (about 39% of males and 25% of females) - consumes alcohol. This is nearly twice the population of India. Slightly more, about 2.4 billion, are abstainers, while just over two-thirds of a billion are former drinkers who have now quit. Over a quarter of the world’s population aged 15-19 years, and a third to over half of the 20-24-year-old group, currently drink.   Studies conducted across the globe have common findings - low to moderate drinking protects against heart attacks, and probably diabetes as well. Only heavy drinkers (> 4 drinks per day for men, > 3 drinks for women) have higher mortality, largely due to alcoholic liver diseases (fatty liver, cirrhosis, etc.), and the rare very-high-volume drinkers risk heart failure (alcoholic cardiomyopathy).   The common thread is the presence of a J-shaped curve, i.e., low to moderate drinkers are less prone to some diseases than always-abstainers (non-drinkers). The risk rises once people start consuming high volumes of alcohol daily - the most consistent evidence being found for heart attacks, or Coronary Artery Disease (CAD). It is interesting to note that some of the diseases apparently less likely in low-volume drinkers are deafness, hip fractures, the common cold, dementia, cancers and even liver cirrhosis.   This concept came about from the so-called French Paradox. The French consumed large amounts of saturated fats and smoked regularly, yet had a much lower risk of CAD than other populations. Wine was postulated to be one of the possible causes of this unexpected finding.   Guidelines have consistently permitted (even endorsed) regular low-volume alcohol consumption as a protective measure against heart attacks, while stopping short of recommending that never-drinkers start drinking, as the data wasn’t solid enough. Red wine has the most supportive data, probably due to the presence of a compound called resveratrol, among others.   All in all, the overriding belief has been one of tangible benefits from modest, regular consumption of alcohol, apart from its positive social implications.   No wonder a recent study conducted by AIIMS, New Delhi, reported alcohol as the most commonly abused substance. Nearly 15% of the adult Indian population were regular drinkers, with 1 in 5 of them addicted to or dependent on it.   Independent analysts, however, found these conclusions problematic. Subjects self-reporting their consumption generally tend to under-report the amount of alcohol consumed. Moreover, data on patterns of drinking and binge drinking are often missing, and several health problems associated with alcohol tend to be ignored (uncontrolled blood pressure, accidents, inter-personal violence, depression, etc.). Many studies had been pooling previous drinkers who gave up drinking, often due to disease, in the same group as never-drinkers, further confounding the analysis.   Recently, however, a shift has been noted.
Since it is a little tricky to collect data on huge numbers of individuals (hundreds of thousands, followed up over several years, in different age groups) to conclude benefit or harm from social drinking, combining the results of multiple studies with similar designs is a common statistical method in clinical medicine (the meta-analysis and systematic review). This provides a rather unique look into the outcomes of millions of individuals from different parts of the world.   One major such analysis, in 2016, covering nearly 4 million individuals, found no evidence of a survival benefit with alcohol. The next year, the same group analysed the impact on CAD events in nearly 3 million subjects, and failed to find solid evidence of a positive effect. This analysis found protection against heart attacks in Whites over the age of 55 with low to moderate consumption, but none in individuals < 55 years of age, or in Asians.   So, to drink or not to drink, that is the question!   This was the background for the largest-ever study on alcohol. Funded by a neutral organisation (the Bill & Melinda Gates Foundation), the Global Burden of Disease (GBD) 2016 Alcohol Collaborators reported, in August 2018, on the outcomes associated with alcohol consumption in 28 million individuals from 195 countries and territories (aged 15 years to 95 and above) over 1990-2016.   They found that alcohol was associated with a disturbing 2.8 million deaths in 2016 (vs 5.5 million associated with smoking), making it the seventh-largest contributor to disease and mortality. Further, in the age group 15-49 years, it was the leading risk factor for disease burden worldwide, accounting for 12.2% of all male deaths and 3.8% of female deaths. Apart from alcoholic liver disease, its regular consumption also increased the risk of uncontrolled hypertension, strokes (clot formation or bleeding inside the brain), tuberculosis, road accidents and several types of cancers. Data on interpersonal violence were missing, so the true morbidity may be even higher than reported here.   Some protection against heart attack was found in females > 50 years of age and males > 60 years. This showed a J-shaped curve, with the maximum benefit accruing to low-volume drinkers consuming < 1 standard drink per day (10 g of ethyl alcohol). However, this benefit was offset by the much greater increase in risk of the above-mentioned diseases, ultimately leading to no net beneficial effect.   The investigators concluded that the amount of alcohol correlating with minimum risk of disease or mortality is zero. As guidelines continue to uphold the cardio-protective effects of alcohol, a revision is urgently needed to correct this fallacy and prevent a big chunk of preventable disease, just as for smoking.   In summary, alcohol is not the panacea it is often made out to be. Occasional social drinking (< 1 drink per week) may be acceptable and even beneficial for the heart. Regular drinking is no better for heart attack prevention and, in fact, increases the risk of multiple other health issues, totally reversing all the putative cardiac benefits.   This is a recurring column published every Sunday. Click here to view my other articles on health, nutrition and exercise.

The Amazing Power of Long-Game Plan

RichifyMeClub

Investment

In 1964, Gary Flandro stated that the outer planets – Jupiter, Saturn, Uranus, and Neptune – would align in a rare pattern in the late 1970s. And NASA scientists really wanted to seize such an event, one that occurs only once in nearly two centuries – 176 years, to be very precise. But how? By launching a mission that would take a road trip past these giant marbles. In less time. At lower cost.   Determined not to let go of this opportunity, they approached the then President of the United States, Richard Nixon, and apprised him of their plan. To their utter surprise, they received the green signal for the mission, but with limits. A limit that would restrict them to visiting just 2 planets at a time. And that too for a short period of time. But NASA had other plans. A long-game plan.   Finally, in 1977, their plan came to fruition.   One by one, two space probes were launched: Voyager 2, followed by Voyager 1 after 16 days. The former was intended to visit all 4 planets while the latter would follow a different trajectory covering Jupiter and Saturn.   Initially, these twins were commissioned for planetary exploration. But later on, their missions were extended. To explore the outer limits of the solar system. To find traces of extraterrestrial life deep in interstellar space. Space where no man-made object, in the history of mankind, had ever ventured. NASA had planned a mission within a mission.   It’s been 41 years now. Under the assault of unrelenting radiation and extreme temperatures, the twins are pushing deeper into space than ever before. Beating the Sun’s gravitational pull, a lovely small space probe is still going strong at a speed of 15 km/s in an insanely huge universe. By the time you scroll down to read further, it’ll complete a full marathon.   But how did NASA’s long-game plan help? Well, what the mission has uncovered so far had remained unknown to humans for centuries. It’s the long-game plan that reported the presence of an active 350-year-old giant cyclone on Jupiter, revealed Saturn’s polar regions to the world, and found answers to unsolved mysteries that baffled even the great astronomers Galileo and Copernicus.   Let us now talk about its relevance in the field of investing.   When we start a SIP (Systematic Investment Plan), the initial contributions will never create something enormous instantly. In fact, nothing changes drastically during the initial few weeks, months, or even years. Rather, it turns out to be extremely boring. And who loves monotony throughout the journey?   And that’s why we love playing short games. The games that produce immediate results. Going through a summary instead of reading The Intelligent Investor. Tweeting rather than writing a blog post. Munching a Bournville instead of running on a treadmill. Short games are extremely addictive. Believing that just Rs. 5,000 a month is not going to make us rich tomorrow, we convert it into something that makes us really happy on the spot. We tend to overlook the potential of a paltry amount of money paired with a long-game plan.   The problem with the short game is that the costs are small and never seem to matter much on any given day. Saving $5 today won’t make you a millionaire. Going to the gym won’t make you fit. Reading a book won’t make you smart. Since the results are not immediate, we revert back to the short game.
~ Shane Parrish, Farnam Street   For instance, when we increase our monthly SIP amount by 5%, 10%, or even 12% every year, the end result barely changes during the initial years of compounding. The difference in the end result between an increment of 5% and one of 12% seems irrelevant. In fact, it stays neck and neck for the first 7-10 years.   But as the game gets elongated, this difference starts exploding. The end results start taking the shape of a snowball. The variance that looks minimal at first starts growing bigger and bigger. In a long-game plan of 40 years, an annual step-up of 12% instead of 5% in a SIP of Rs. 5,000 a month makes your corpus bigger by Rs. 10cr. A huge disparity.   On the contrary, this variation reduces to just Rs. 4 lakh with a short-game plan of 10 years. And much less if planned even shorter.   Source: RichifyMeClub   When days are turned into months, months into years, and years into decades, the compounding treadmill picks up speed. The tiny contributions that look unbelievably small initially become enormous after decades of compounding. What looks utterly meaningless in a short game becomes impossible to ignore in a long-game plan.   James Clear, the author of Atomic Habits, has written extensively about forming habits with a long-game plan. As per the book, we grossly underestimate the importance of the small efforts that contribute towards achieving our goals. If we swim 40 minutes a day for a week, we still don’t get lean. If we cook for a week, we still don’t become a skilled chef. If we write for a week, we still don’t become a prolific writer. But when continued for months and years, the results turn out to be worth it. Certainly, the long-game plan works in forming a habit too.   If we commit to getting better by just 1% every day, we’ll end up about 37 times better by the end of the year. And this keeps compounding year after year. The equations below, from James Clear, show that following a system with a long-game plan does benefit us immensely. Unquestionably!   1% better every day for 1 year: (1.01)^365 = 37.78   1% worse every day for 1 year: (0.99)^365 = 0.03   ~ James Clear, Atomic Habits   Kathleen Magowan, despite being a teacher for 35 long years, amassed a fortune of $6 million. Throughout her life, she never attracted anyone’s attention. She lived frugally in the same inherited house. Sweet. Compassionate. She always preferred to maintain a low-key profile. Surprisingly, after her death in 2011, it was revealed that even her Quaker Oats boxes were worth $183K, as they contained savings bonds from the 1940s and 50s. Yes, she invested with a long-game plan.   Same goes for Anne Scheiber, who built a massive corpus of $22 million despite living on a pension of $3,100. Yes, she invested with a long-game plan. And just like Voyager 2, Warren Buffett is still going strong even at the age of 88. Yes, he has been investing with a long-game plan since the age of 11.   Originally Published in RichifyMeClub.
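To see how the step-up arithmetic quoted in the piece above plays out, here is a minimal Python sketch of a monthly SIP with an annual step-up. The 12% assumed annual return is an illustrative assumption of ours, not a figure from the RichifyMeClub piece, so the absolute corpus numbers are indicative only; the point is how the gap between a 5% and a 12% step-up stays negligible over 10 years and explodes over 40.

# Sketch: monthly SIP with an annual step-up in the contribution.
# Assumption (ours, illustrative): 12% annual return, compounded monthly.

def sip_corpus(monthly, step_up, years, annual_return=0.12):
    """Final corpus when the monthly contribution rises by `step_up` each year."""
    monthly_rate, corpus = annual_return / 12, 0.0
    for year in range(years):
        contribution = monthly * (1 + step_up) ** year
        for _ in range(12):
            corpus = corpus * (1 + monthly_rate) + contribution
    return corpus

if __name__ == "__main__":
    for years in (10, 40):
        low = sip_corpus(5_000, 0.05, years)
        high = sip_corpus(5_000, 0.12, years)
        print(f"{years} years: 5% step-up = Rs {low:,.0f}, "
              f"12% step-up = Rs {high:,.0f}, gap = Rs {high - low:,.0f}")
    # Under these assumptions the 10-year gap is a few lakh rupees,
    # while the 40-year gap runs into crores.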

Storytelling: More Than a Presentation Tool

INSEAD Knowledge

Culture

By Roger Jones, CEO, Vantage Hill Partners   How a Story Can Make a Strategy Come to Life.   We all love a good story. Perhaps for you it’s the suspense and rollercoaster of emotions of an action movie or the page-turning intrigue of a novel. We can’t wait to see what happens next as we immerse ourselves and drift into another world.   Many executives too have discovered the power of leadership storytelling, but most only see stories as a tool to bring presentations to life. And many leaders find greater comfort just using logic. However, when used wisely, stories can be employed to engage with your stakeholders’ emotions, change attitudes and behaviors, and, importantly, make change happen.   Learning Stories   I recall Sandra (not her real name), a recently appointed CEO of a technology company. When we first met she expressed her frustration that the changes she needed to make were not happening as smoothly or as quickly as she’d hoped: everything from getting her senior managers to implement the culture change needed through to her top team supporting her future plans.   Sandra then shared her plans with me. They looked really well thought through: detailed analysis of competitors, market trends and financials, a clear customer value proposition. To her it was all reassuringly data-driven, and she wondered why resistance to them was so rife.   I then asked her how her children learnt about the world when they were young. She was silent for a few moments and, I think, a little perplexed as to why I should ask such a question when what was top of her mind was getting things done in her business. Her response was one word: stories. Sandra then rattled off the names of a number of books she remembered her children reading: Topsy and Tim Go To The Dentist and Peppa Pig: Recycling Fun.   We then spoke about why these and other stories helped her children develop. We explored topics such as how stories helped her children put their world into context, showing them how things work, how we relate to each other and how to feel comfortable with new experiences. A broad smile appeared on Sandra’s face when she told me how one day, when her son was four years old, she remembered walking into his bedroom to see him tidying up, a first for him. He told his mum he was doing this because a character in a story he’d seen had tidied their bedroom. At this point Sandra laughed and said stories even inspire action.   It was then that Sandra had what can only be described as her light bulb moment. She declared she was a ‘logic junkie’ (her words, not mine) and hadn’t thought of how she might use narratives for business aims. Then she asked,   How can I use stories to inspire change, when all my creative juices were sucked out of me way back in my career?   Becoming a Storyteller   A reasonable question, to which I responded:   How would a documentary film maker explain your culture change to senior managers or your future plans to your top team; how would an artist, a creative writer, a teacher or a customer do the same?   This type of thinking was new for Sandra, and it took her a little time to become playful and share her random ideas. Over a short period of time we lifted her future plans off the page and turned them into a future story that she shared with her top team who, although they had many questions, seemed to feel comfortable with her vision.
Between us, we thought of how we might describe the story of the culture change, and devised a framework for her senior managers to create the story themselves so they felt a greater sense of ownership.   At first, Sandra felt nervous using storytelling rather than her more customary factual approach, but when she saw how facts can be wrapped in narrative to make them more memorable, and how resistance to her initiative was beginning to melt away, her nervousness evaporated.   Sandra started to see other benefits too. She felt there was greater trust in her top team after our away day, in which team members disclosed experiences from their early lives that had shaped their leadership values. The business development teams’ results also improved after they wove stories into their traditional sales approach.   Clearly, to achieve all these positive results Sandra brought other approaches into play, such as fair process and a participative style, but she puts a significant amount of the credit down to her adopting a storytelling approach.   If using narrative is something you seldom do, then the first step in becoming an effective leadership storyteller is to use your personal experiences. Here are some dos and don’ts to help get you started:   1. Draw a straight line on a sheet of A4 paper and mark your age from birth to today on it. Then note down on the time-line experiences you can remember and what you learnt from each; these might be anything from something a teacher said to you through to what you learnt if you failed your first driving test.   2. Select three or four experiences that were significant for you and that you will be happy to share, though nothing boastful.   3. Now, craft these experiences into stories by putting them into this structure: introduce the characters; begin the journey; have an element of surprise or a point that gave you a light bulb moment; then resolve the story. Finally, share with your audience what you learnt as a result of the experience.   4. Think how you might use each story. Perhaps one will help encourage change, another trust or collaboration, and so on.   5. Keep your story short. Three minutes is long enough.   6. When sharing your story, tell people how you felt during the experience.   As you start to feel comfortable using your personal stories, you might then like to think about how you can bring your company vision, strategy or values to life using organizational stories.   Roger Jones is an executive and top team coach, and author of The Storytelling Pocketbook. An alumnus of INSEAD’s EMCCC, he has had his leadership storytelling work featured in the Financial Times.   Follow INSEAD Knowledge on Twitter and Facebook.   This article is republished courtesy of INSEAD Knowledge. Copyright INSEAD 2018.

How Airlines Manage Conflicts Between Profits and Safety

INSEAD Knowledge

Aviation

By Henrich Greve, INSEAD Professor of Entrepreneurship, and Vibha Gaba, INSEAD Associate Professor of Entrepreneurship   Warning: Don’t Read this Just Before Your Next Flight.   Commercial air travel is an industry in which relatively small mistakes can result in disproportionately dire consequences. While it is best not to think about this when on the tarmac, it is comforting to know that safety, for airlines, is a major priority. Even so, there are limits to how much an airline can spend, and firms must balance the demands of safety and profitability to avoid running financially aground.   In other words, when it comes to safety, it is not so much a matter of “how safe can we be?” as “how safe can we afford to be?”   The question of safety vs. profitability is an example of the conflicting operational objectives firms face on a regular basis and the focus of our recent research, “Safe or Profitable? The Pursuit of Conflicting Goals” (forthcoming in Organization Science).   The study examined how airlines balance the dual focus of safety and profits, and the influence these factors have on the costly decision of whether to change the configuration of their fleet of aircraft after an accident.   Updating fleets (replacing older aircraft, or those perceived to be less safe, with newer, more reliable models) is an important way that airlines ensure the safety of their operations. However, fleet replacement can be a costly transaction involving selling at a discount and buying at a premium, and decisions are not made without close scrutiny of an airline’s balance sheet.   It may seem intuitive that more profitable airlines are in a better position, and therefore more likely, to replace aircraft perceived as less safe. We found that this was not the case. In fact, while more profitable airlines are generally ahead on the safety front, when it comes to making changes to their fleet after an accident, it was the less profitable carriers that were more likely to sell off aircraft and replace them with models considered more reliable.   Less Profitable Firms are More Reactive   To track aircraft sales and purchases, we used fleet composition data from the website www.airfleets.net, which includes full data on passenger aircraft across the industry, as well as accident records of all global airlines. We then narrowed these accident statistics down to those accidents in which an aircraft was deemed permanently unfit to fly (referred to as “hull loss accidents”).   An analysis of these statistics showed that following a hull loss accident, among the group of airlines that boast above-average safety records, low-profit carriers increased aircraft sales by 55% while high-profit airlines increased aircraft sales by 29%.   Profitability played an even more decisive role among airlines with relatively high accident rates. When we assessed airlines with a similar below-average safety record, firms with low profitability were 50% more likely to sell aircraft than those with higher profitability.   We also examined the tenor of media coverage for each aircraft model following an accident and found that public relations, while not as influential as accident rates, were a consideration for decision makers. Less profitable airlines were more inclined to sell when the media tenor regarding their fleet was least favourable.
In short, while underperforming airlines were more likely to replace aircraft in a bid to improve safety, prosperous firms were not so reactive, being less at risk and more able to survive a scandal.     Should Boeing be Concerned?   These findings are particularly interesting when looking at the industry today, as airlines consider their response to the recent air tragedies involving the Boeing 737 Max. After two fatal crashes and the worldwide grounding of the model, air carriers are faced with the costly decision of what to do next. The Boeing 737 Max is a relatively new model but one that has been widely accepted by airlines, particularly low-cost carriers. As of February 2019, 376 aircraft have been delivered and another 4,636 are on order. Already, Garuda Indonesia, Lion Air and a number of other carriers are reportedly dropping or reviewing their orders with Boeing.   However, given our findings and the fact that budget airlines, which make up the bulk of Boeing 737 Max’s top customers, are generally more profitable than full-service carriers, it is unlikely that too many airlines will cancel their orders. Southwest Airlines, the number one customer of the Boeing 737 Max, recently completed its 46th straight year of profitability. Ryanair, another top customer, posted a 2018 net profit of €1.45 billion, a 10 percent increase on the previous financial year. That flydubai, the Boeing 737 Max’s second biggest customer, has posted full-year profits since 2012 and came out earlier this month with assurances the aircraft remained integral to its future, further supports our findings.   The Ultimate Objective is the Firm’s Survival   While the results of our study may fly in the face of general expectations, they actually confirm the premise that when companies perform below aspirations (i.e. less profitably), managers become more risk averse and take actions aimed at improving their firm’s survival.   This is not to suggest that nervous travellers should bypass the more profitable, industry-leading carriers in favour of their less successful competitors. There is already good evidence that an airline’s safety record will decline when its margins or profitability are low. However, aircraft sales and buys are made at the top level of an organisation, by individuals who are well aware of the safety consequences of their actions and of the consequences that any accident will have on the firm. Senior managers may even suspect that cost-cutting occurring in other areas of the firm’s operations has the potential to endanger safety, and therefore attempt to compensate for that possibility when deciding what to do about aircraft replacement.   Ultimately, what our study found was that both safety and financial objectives are taken into consideration when airlines decide whether to replace aircraft models after an accident. The goal that triggers the stronger reaction is the one perceived as being more important for the firm’s survival.   Henrich R. Greve is a Professor of Entrepreneurship at INSEAD and the Rudolf and Valeria Maag Chaired Professor in Entrepreneurship. He is also the Editor of Administrative Science Quarterly and a co-author of Network Advantage: How to Unlock Value from Your Alliances and Partnerships. You can read his blog here.   Vibha Gaba is an Associate Professor of Entrepreneurship at INSEAD. She is also the Programme Director of Leading Successful Change and Learning to Lead, INSEAD Executive Education programmes.   Follow INSEAD Knowledge on Twitter and Facebook.   

First Among Equals: Differential Voting Rights in India

Kushan Chakraborty

Legal

India has for long held the common perception of one-share, one-vote, which assumes that all shares represent an equal percentage of a corporation and that all shareholders have equal rights and obligations. This common perception is likely to witness some churn in the times to come. The capital markets regulator, the Securities and Exchange Board of India (“SEBI”), has issued a consultation paper on the issuance of shares with differential voting rights (“DVRs”) and invited public comments on it till April 20, 2019.   Let’s try and unbox the concept of DVRs.   DVRs, or dual-class shares (“DCS”), represent a system in which a single company may issue different classes of shares with distinct voting rights and dividend payments. Typically, the shares issued to the general public are distinct from the shares issued to the founder(s)-promoter(s) and investor(s), in that the latter class may have higher voting power or more control over the company. Jurisdictions like the United States have had DCS structures for a few decades now, while others like Singapore and Hong Kong have recently jumped on that bus. Still others like the United Kingdom and Australia are more circumspect about the disparity the DCS structure creates between shareholders' economic and voting rights, and have thus far not permitted it.   The American affinity for the DCS structure is, among other reasons, why certain companies like Manchester United (UK) and Alibaba (China) chose to list their IPOs on the New York Stock Exchange even though football and Alibaba aren’t as popular in the US. Others like Facebook and Google, which had huge listings in the US, deployed versions of the DCS structure to great effect. On the other hand, SNAP Inc., the holding company of the massively popular social media app Snapchat, raised more than a few eyebrows by offering shares with no voting rights in its IPO.   Voting is important as it provides shareholders control over the company’s affairs. Under the Companies Act, 2013, shareholders have the right to vote on matters relating to the company’s merger, appointment of directors, amendments to the constitutional documents of the company, etc. In a single-class structure of shares, A, holding 10 shares, will exercise the same degree of control over the company as B, also holding 10 shares. In a DCS structure, if A holds 10 shares with higher voting power, she will exercise a higher degree of control over the company than B, who may hold 10 ordinary shares.   Interestingly, SEBI had prohibited companies from issuing shares with “superior” rights with regard to voting and dividends in 2009. This had acted as a barrier to Indian companies issuing shares with DVRs, prompting several companies to list outside India in order to incorporate a DCS structure in relation to their shares. This new step by SEBI is seen by some as an attempt to make India a friendlier jurisdiction for Indian as well as foreign companies to incorporate and list.   SEBI's consultation paper proposes two routes for issuing DVRs:   For companies that are unlisted but propose to list on the stock exchange with DVR structures (primary listings); and   For companies that are already listed and propose to list DVRs (secondary listings)   Further, it discusses the concepts of “superior” and “inferior” rights attached to shares: when shareholders receive voting rights in excess of one vote per share, these are shares with superior voting rights (SR shares); conversely, shares with inferior or fractional voting rights are FR shares.
SR shares may only be issued by unlisted companies, and that too only to promoters. The idea is to ensure that promoters maintain more control via their voting rights, in addition to their economic rights, before the company opts to list its shares. Once the company is listed, it can no longer issue SR shares. There are a few other conditions related to SR shares:   Since they can only be issued to promoters, there can be no encumbrance over them. This means that promoters will not be permitted to pledge these SR shares for any debt funding.   They are subject to a perpetual lock-in after the company’s IPO.   They can carry a maximum ratio of 10:1, i.e. they cannot exceed 10 votes per share.   They will not carry superior voting rights on every matter. On certain matters, all shareholders (including those holding SR shares) must be subject to the default rule of one vote per share. These are crucial matters that are fundamental to the existence and business of the company. In this regard, SEBI has proposed some "coat-tail" provisions, under which SR shares will be treated at par with ordinary shares and FR shares on matters such as the appointment and removal of independent directors or the auditor, change of control, entering into a contract with a person holding SR shares, alteration of the constitution of the company, voluntary winding up of the company, etc.   SR shares will be subject to a sunset clause, under which they would automatically convert into ordinary shares at the end of 5 years from the date of listing, at which point their voting rights will become at par with ordinary voting rights. However, the life of SR shares may be extended for a further period of 5 years if approved by a special resolution of all shareholders voting on a one-share, one-vote basis. Promoters, of course, have the discretion to accelerate the conversion of SR shares to ordinary shares.   The addition of the sunset clause highlights that DVRs are mostly required at the initial stages of a company’s lifecycle. During and immediately post-incorporation, DVRs play an important role in enabling the promoters to assume business risks without ceding control. Subsequently, once the business is more established, shares with DVRs lose their purpose and are converted into regular shares. The sunset clause also has a corporate governance angle, effectively preventing promoters from exercising control over a company by holding on to a small number of shares for a long period of time.   On the other hand, the paper proposes that FR shares may only be issued by companies whose shares have been listed on the stock exchange for at least a year. FR shares are usually issued to outsiders and investors who do not seek control in the company. Voting rights on FR shares cannot exceed a ratio of 1:10, i.e. one vote for every 10 shares. Companies may pay a higher dividend on FR shares as an incentive for investors to opt for them, in lieu of lower control rights.   The paper also recommends amendments to the Companies Act, 2013 and various other SEBI regulations, such as those relating to capital issuances, continuous listing requirements, buybacks and takeovers, to reflect the impact of DVRs on these legislative and regulatory provisions.
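To make the SR-share mechanics concrete, here is a small, purely hypothetical illustration in Python. The shareholding numbers are invented and are not drawn from the SEBI consultation paper; the sketch simply shows how a promoter holding a minority of the equity can retain a large majority of the votes through 10:1 SR shares, and how that control falls back to the economic stake once the sunset clause (or a coat-tail matter) forces one-share, one-vote.

# Hypothetical cap table: promoter holds 20% of the equity as SR shares
# carrying 10 votes each; the public holds the rest as ordinary shares.
promoter_shares, public_shares = 2_000_000, 8_000_000
SR_VOTES_PER_SHARE = 10   # maximum voting ratio proposed for SR shares (10:1)

def promoter_voting_share(votes_per_promoter_share):
    promoter_votes = promoter_shares * votes_per_promoter_share
    total_votes = promoter_votes + public_shares  # ordinary shares: 1 vote each
    return promoter_votes / total_votes

economic_stake = promoter_shares / (promoter_shares + public_shares)
print(f"Promoter economic stake: {economic_stake:.0%}")                                    # 20%
print(f"Voting power with 10:1 SR shares: {promoter_voting_share(SR_VOTES_PER_SHARE):.0%}")  # ~71%
print(f"Voting power on coat-tail matters / after sunset: {promoter_voting_share(1):.0%}")   # 20%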

Fending Off Disruption: Incumbent Strategies for Digital Transformation

Prof Sia Siew Kien

Digital

With all the hype about technological disruption, companies are scrambling to jump onto the digital transformation bandwagon. An international Ernst & Young survey, conducted on over 900 companies in 2017, shows that 90% of these companies are elevating digital priorities in their strategic planning over the next two years. But more digital initiatives do not mean a stronger transformation strategy. You need to understand the tech disruption scenario in your specific context when you formulate an effective and targeted transformation strategy.   Clarify your Specific Tech Disruption Scenario   According to Professor Ron Adner (Tuck School of Business at Dartmouth College) and Rahul Kapoor (The Wharton School at the University of Pennsylvania), a specific disruption scenario is the unfolding of competitive forces between the new and the old technology ecosystems.   The greater the challenges (e.g., industry resistance, regulatory constraints) confronting the new technology ecosystem, the slower the disruption. The more positive the improvement prospects of the old technology ecosystem, the longer incumbents will remain relevant and competitive. These different competitive dynamics yield four disruption scenarios - creative destruction, illusion of resilience, robust coexistence, and robust resilience.   Creative destruction will take place very rapidly, where the new tech ecosystem is emerging fast and the old tech ecosystem is no longer relevant.   Illusion of resilience is a period of inactivity followed by rapid disruption, where the old tech ecosystem is no longer relevant but the new tech ecosystem is emerging slowly, such that incumbents will continue their dominance until new entrants resolve the emergence challenges.   Robust coexistence occurs when incumbents and new entrants each have their respective advantages, because the new tech ecosystem is emerging fast and the old tech ecosystem is still relevant.   Robust resilience is the best scenario for incumbents, where the old tech ecosystem continues to be relevant and the new tech ecosystem is emerging slowly.   Align Your Digital Transformation Strategy   In the Creative Destruction scenario, passive participation is most sensible, as it is too late for incumbents to develop new capabilities and the old tech ecosystem is no longer relevant. Refusal to acknowledge the reality can be costly. For example, to counter the rapid rise of mobile payments in China (e.g., AliPay and WeChat Pay), ICBC, China’s largest bank, invested aggressively in its own payment app and e-commerce site, but still failed to challenge the dominance of these disruptors. Instead, incumbents should focus on their niches to do what they do best, and ensure that they are readily connectable to these platforms to participate in the growth of the disruptors.   In the Illusion of Resilience scenario, incumbents should focus on preemptive reinvention. This is what DBS Bank has done. To preempt disruption of its core retail banking business, it embarked on a radical tech transformation to ramp up its digital capabilities to be like a tech giant. It invested heavily in its people to become a 22,000-person startup. It also disintermediated itself to “make banking invisible” by embedding itself in the lives of its customers.   However, preemptive reinvention is still a defensive strategy. DBS is only getting itself on par with the tech disruptors in terms of new capabilities.
Align Your Digital Transformation Strategy

In the creative destruction scenario, passive participation is the most sensible course, as it is too late for incumbents to develop new capabilities and the old tech ecosystem is no longer relevant. Refusing to acknowledge this reality can be costly. For example, to counter the rapid rise of mobile payments in China (e.g. Alipay and WeChat Pay), ICBC, China's largest bank, invested aggressively in its own payment app and e-commerce site, but still failed to challenge the dominance of these disruptors. Instead, incumbents should focus on their niches, do what they do best, and ensure that they are readily connectable to these platforms so they can participate in the disruptors' growth.

In the illusion of resilience scenario, incumbents should focus on preemptive reinvention. This is what DBS Bank has done. To preempt disruption of its core retail banking business, it embarked on a radical tech transformation to build digital capabilities on par with those of a tech giant. It invested heavily in its people to become a "22,000-person startup". It also disintermediated itself to "make banking invisible" by embedding itself in the lives of its customers.

However, preemptive reinvention is still a defensive strategy. DBS is only bringing itself on par with the tech disruptors in terms of new capabilities. With the disruptors' entrance into Singapore (e.g. Ant Financial and Grab Pay), the competitive battle is still out there to be fought. But DBS is now far better prepared to compete with these disruptors.

Under the robust coexistence scenario, incumbents still have some relevant old capabilities but no time to develop new ones. Disruptors, on the other hand, have new capabilities but need access to some old resources. Since each side holds a piece of the puzzle, incumbents should seek a win-win strategic collaboration with the disruptors.

Citibank, for example, has accumulated deep global treasury management expertise in enterprise banking, but it lacks new digital capabilities. Its strategy has been to actively seek partners with complementary capabilities. KJ Han, Chief Executive of Citibank Singapore, coined the term "fintegration" and noted that banks will need to become extraordinarily adept at integrating the best fintech innovations into their operations.

Finally, in robust resilience, incumbents have the opportunity to entrench their competitive position through platform transformation, given their advantageous starting points. For example, Ping An, China's second largest insurance provider, went beyond reimbursement for social health insurance to build a digital ecosystem that integrated relevant healthcare services around its customers. Ping An Health Cloud facilitated the sharing of patients' electronic medical records across stakeholders (patients, clinics, insurance providers, government). Ping An Good Doctor enabled online medical consultation, and Ping An Wanjia offered offline healthcare services linked to thousands of clinics. The transformation leveraged Ping An's strong incumbent advantages - high quality medical data, an extensive hospital network, and a massive user base.

Measure Your Digital Transformation Efforts

There is no one-size-fits-all digital transformation strategy. Ask these two questions:

Which tech disruption scenario are you encountering?

What should be the focus of your digital transformation strategy?

Such clarity should help you establish appropriate measures to track the progress of your transformation efforts (e.g. KPIs for "platform transformation" versus "preemptive reinvention").

The truth is that your digital transformation efforts will be scrutinized! Unlike venture funding for startup disruptors, investments in incumbents' digital transformation must come from their traditional businesses. You need to continuously justify the bridging of resources from the old to the new. Having little to show after pouring in huge sums of money is the surest way to kill such transformations. You need to demonstrate real business value as you progress.

Co-written with Mou Xu.

About the authors: Sia Siew Kien is associate professor of Information Technology and Operations Management at Nanyang Business School, NTU Singapore, and Mou Xu is a research associate at the business school's Asian Business Case Centre.

Inside Epic Games' Fortnite Battle Royale: Is "Free to Play" the Way to Go?

Prince Thomas

Digital

Fortnite has lately emerged as the poster child for the video gaming industry, having established itself as a cultural phenomenon within the community in a very short time. Even leading entertainment outlets such as Netflix now recognise it as competition.

With c. 200 million registered players, Fortnite was the biggest earner amongst video games, making $2.7bn of revenue in 2018, as per analyst firm SuperData. This is the highest annual earnings for any game, ever! The significance of this achievement is only escalated when one considers that Fortnite is a "free to play" game and requires no initial investment from the consumer's side.

The game is designed around a "games as a service" philosophy, wherein games are viewed as a service rather than a product - somewhat like Hotstar, whose freemium model lets you enjoy select content free of charge while paying members get access to exclusive content.

Along similar lines, Fortnite releases content on a regular basis in the form of battle passes. The release of these passes is spread out over multiple seasons throughout the year, and they contain cosmetic items that alter the appearance of your character and weapon models. These cosmetics can be purchased via the in-game currency, "V-bucks", which is of course bought with real money.

Paid vs Free

To put things into perspective, PUBG, the top earner in the premium games category, earned around $1bn in revenue in 2018.

Note: premium games are those for which you need to pay an upfront cost, while free to play games are those that require no initial investment (like Fortnite).

Measured against free to play games, PUBG doesn't even make it to the top 10. Yes, there were more than 10 free to play games that made over $1bn in 2018, compared to only one title on the premium list.

It has to be noted that while PUBG is a $20 title, it is also offered in a free to play version. In fact, a substantial portion of the money PUBG earned came from its free version released on mobile platforms. Take away this portion of the revenue and PUBG wouldn't even cross the billion dollar mark.

An Example

The success of Fortnite and other similar free to play games has attracted the attention of big players within the industry. There is a growing shift towards the games-as-a-service model built around a free to play "in-game economy".

A prime example of this shift is publisher Electronic Arts' latest release, APEX Legends, one of Fortnite's prominent competitors. The game was launched as a free to play title in early February with zero publicity. Within a week of launch, it snagged over 25 million registered players, and EA's shares surged more than 16% - their best single-day gain in more than four years.

Yet another example is publisher Valve, which launched a free to play version of its popular shooter CS:GO in 2018. This shifting trend is likely to continue throughout 2019, with more multiplayer titles like Unreal Tournament and Total War going the free to play way.

Will It Work?

Fortnite's success bears testimony to the fact that the free to play model can work, and work rather well.

For multiplayer games that make most of their revenue through in-game transactions, charging for the game upfront makes little sense, as it limits accessibility for potential players who might have tried the game had it been free of cost. Think of it this way: would 200 million players have bought Fortnite if they had to pay $20 for it? Would it bring in the same amount of money as it is bringing in now? The rough arithmetic below gives a sense of the trade-off.

Shifting towards a free to play model allows publishers to reach a wider audience, which in turn leads to greater customer engagement and, ultimately, higher revenue.
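A back-of-the-envelope comparison makes the point. The Fortnite player count and 2018 revenue below are the figures cited above; the number of hypothetical premium buyers is purely an assumption for illustration, not a figure from the article.

```python
# Rough, illustrative comparison of a premium (pay-upfront) model vs Fortnite's
# free to play model. PREMIUM_BUYERS is an assumed number, not a reported one.
PREMIUM_PRICE = 20          # $ upfront, PUBG-style
PREMIUM_BUYERS = 50e6       # assumption: far fewer players when there is a paywall

F2P_PLAYERS = 200e6         # Fortnite's reported registered players
F2P_REVENUE_2018 = 2.7e9    # Fortnite's reported 2018 revenue (SuperData)

premium_revenue = PREMIUM_PRICE * PREMIUM_BUYERS          # one-off sales revenue
avg_revenue_per_player = F2P_REVENUE_2018 / F2P_PLAYERS   # ~ $13.5 per player per year

print(f"Premium model (one-off): ${premium_revenue / 1e9:.1f}bn")
print(f"Free to play (recurring): ${F2P_REVENUE_2018 / 1e9:.1f}bn, ~${avg_revenue_per_player:.1f} per player")
```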
But Not Without its Challenges

For all its success, the free to play model brings its own set of problems. A free game like Fortnite will have more trouble keeping its player base engaged than one that has already charged an upfront cost. These games need to keep releasing content on a regular basis to prevent the formula from going stale. As the developer of a new free to play game, you also run the risk of your game going unnoticed in an ocean of similar releases. The more crowded the market gets, the more difficult it becomes for a game to find an audience.

Conclusion

Is a company following in Fortnite's footsteps likely to enjoy the same amount of success? Not necessarily. Fortnite is a different beast altogether: it provides an environment for social interaction like no other. You can think of it as a Facebook for gaming. You log in at any time of the day, find your friends online and have a fun time with them - except that this fun is more interactive, and that is what keeps bringing people back for more. The cosmetic system lets users create unique looks for their characters and show them off to the world - a bit like Instagram this time.

There have been many attempts to replicate the Fortnite formula, but so far none has succeeded. Additionally, the developer, Epic Games, puts a considerable amount of resources into analysing player data and feedback to constantly evolve the game into something the players want. This goes a long way towards explaining how it has kept such a large customer base engaged since the game's release.

The benchmarks set by Fortnite might be hard to replicate, as it launched into a market which had demand but little competition. That is not to say, however, that free to play titles can't succeed in crowded markets. Take, for example, League of Legends - a free to play game launched in 2009. It brought in around $1.4bn in 2018 in a market crowded with similar games.

Thanks to Fortnite, gaming has gone from being a niche market to a mainstream one, with more players coming in every day, bringing different tastes and preferences in the type of games they like to play. This gives publishers enough breathing room to make good money without stepping on each other's toes. Video games taking the free to play route will have an opportunity to attract a wider audience, all the while bringing in more money.

The Strategic Decisions That Caused Nokia’s Failure

INSEAD Knowledge

Tech

By Yves Doz, INSEAD Emeritus Professor of Strategic Management

The moves that led to Nokia's decline paint a cautionary tale for successful firms.

In less than a decade, Nokia emerged from Finland to lead the mobile phone revolution. It rapidly grew to have one of the most recognisable and valuable brands in the world. At its height Nokia commanded a global market share in mobile phones of over 40%. While its journey to the top was swift, its decline was equally so, culminating in the sale of its mobile phone business to Microsoft in 2013.

It is tempting to lay the blame for Nokia's demise at the doors of Apple, Google and Samsung. But as I argue in my latest book, "Ringtone: Exploring the Rise and Fall of Nokia in Mobile Phones", this ignores one very important fact: Nokia had begun to collapse from within well before any of these companies entered the mobile communications market. In these times of technological advancement, rapid market change and growing complexity, analysing the story of Nokia provides salutary lessons for any company wanting to either forge or maintain a leading position in their industry.

Early Success

With a young, united and energetic leadership team at the helm, Nokia's early success was primarily the result of visionary and courageous management choices that leveraged the firm's innovative technologies as digitalisation and deregulation of telecom networks quickly spread across Europe. But in the mid-1990s, the near collapse of its supply chain meant Nokia was on the precipice of becoming a victim of its own success. In response, disciplined systems and processes were put in place, which enabled Nokia to become extremely efficient and further scale up production and sales much faster than its competitors.

Between 1996 and 2000, the headcount at Nokia Mobile Phones (NMP) increased 150% to 27,353, while revenues over the period were up 503%. This rapid growth came at a cost: managers at Nokia's main development centres found themselves under ever increasing short-term performance pressure and were unable to dedicate time and resources to innovation.

While the core business focused on incremental improvements, Nokia's relatively small data group took up the innovation mantle. In 1996, it launched the world's first smartphone, the Communicator, and it was also responsible for Nokia's first camera phone in 2001 and its second-generation smartphone, the innovative 7650.

The Search for an Elusive Third Leg

Nokia's leaders were aware of the importance of finding what they called a "third leg" - a new growth area to complement the hugely successful mobile phone and network businesses. Their efforts began in 1995 with the New Venture Board. But this failed to gain traction, as the core businesses ran their own venturing activities and executives were too absorbed with managing growth in existing areas to focus on finding new growth.
A renewed effort to find the third leg was launched with the Nokia Ventures Organisation (NVO), under the leadership of a member of Nokia's top management team. This visionary programme absorbed all existing ventures and sought out new technologies. It was successful in the sense that it nurtured a number of critical projects which were transferred to the core businesses. In fact, many of the opportunities NVO identified were too far ahead of their time; for instance, NVO correctly identified "the internet of things" and found opportunities in multimedia health management - a current growth area. But it ultimately failed because of an inherent contradiction between the long-term nature of its activities and the short-term performance requirements imposed on it.

Reorganising for Agility

Although Nokia's results were strong, the share price high, and customers around the world satisfied and loyal, Nokia's CEO Jorma Ollila was increasingly concerned that rapid growth had brought about a loss of agility and entrepreneurialism. Between 2001 and 2005, a number of decisions were made in an attempt to rekindle Nokia's earlier drive and energy but, far from reinvigorating the company, they actually set the decline in motion.

Key amongst these decisions was the reallocation of important leadership roles and the poorly implemented 2004 reorganisation into a matrix structure. This led to the departure of vital members of the executive team, which in turn eroded strategic thinking.

Tensions within matrix organisations are common, as different groups with different priorities and performance criteria are required to work collaboratively. At Nokia, which had been accustomed to decentralised initiatives, this new way of working proved anathema. Mid-level executives had neither the experience nor the training in the subtle integrative negotiations fundamental to a successful matrix.

As I explain in my book, process trumps structure in reorganisations. Reorganisations will be ineffective unless attention is paid to resource allocation processes, product policy and product management, and sales priorities, and unless well-prepared managers are given the right incentives to support these processes. Unfortunately, this did not happen at Nokia.

NMP became locked into an increasingly conflicted product development matrix between product line executives with P&L responsibility and common "horizontal resource platforms" whose managers were struggling to allocate scarce resources. They had to meet the various and growing demands of increasingly numerous and disparate product development programmes without sufficient software architecture development and software project management skills. This conflictual way of working slowed decision-making and seriously dented morale, while the wear and tear of extraordinary growth, combined with an abrasive CEO personality, also began to take its toll. Many managers left.

Beyond 2004, top management was no longer sufficiently technologically savvy or strategically integrative to set priorities and resolve the conflicts arising in the new matrix. Increased cost reduction pressures rendered Nokia's strategy of product differentiation through market segmentation ineffective and resulted in a proliferation of poorer quality products.

The Swift Decline

The following years marked a period of infighting and strategic stasis that successive reorganisations did nothing to alleviate.
By this stage, Nokia was trapped by its reliance on its unwieldy operating system, Symbian. While Symbian had given Nokia an early advantage, it was a device-centric system in what was becoming a platform- and application-centric world. To make matters worse, Symbian exacerbated delays in new phone launches, as whole new sets of code had to be developed and tested for each phone model. By 2009, Nokia was using 57 different and incompatible versions of its operating system.

While Nokia posted some of its best financial results in the late 2000s, the management team was struggling to find a response to a changing environment: software was taking precedence over hardware as the critical competitive feature in the industry. At the same time, the importance of application ecosystems was becoming apparent, but as the dominant industry leader Nokia lacked the skills and the inclination to engage with this new way of working.

By 2010, the limitations of Symbian had become painfully obvious, and it was clear Nokia had missed the shift toward apps pioneered by Apple. Not only did Nokia's strategic options seem limited, none of them was particularly attractive. In the mobile phone market, Nokia had become a sitting duck to growing competitive forces and accelerating market changes. The game was lost, and it was left to a new CEO, Stephen Elop, and a new chairman, Risto Siilasmaa, to draw the lessons and successfully disengage Nokia from mobile phones, refocusing the company on its other core business, network infrastructure equipment.

What Can We Learn From Nokia?

Nokia's decline in mobile phones cannot be explained by a single, simple answer. Management decisions, dysfunctional organisational structures, growing bureaucracy and deep internal rivalries all played a part in preventing Nokia from recognising the shift from product-based competition to competition based on platforms.

Nokia's mobile phone story exemplifies a common trait we see in mature, successful companies: success breeds conservatism and hubris which, over time, result in a decline of strategy processes and, in turn, poor strategic decisions. Where once companies embraced new ideas and experimentation to spur growth, with success they become risk averse and less innovative. Such considerations will be crucial for companies that want to grow and avoid one of the biggest disruptive threats to their future - their own success.

Yves Doz is an Emeritus Professor of Strategic Management at INSEAD. He is the programme director for the Managing Partnerships and Strategic Alliances programme.

This article is republished courtesy of INSEAD Knowledge. Copyright INSEAD 2018.

Startup India Standup India: A New Dawn

Premansh Sahni

Start-ups

Too lazy to go to the market? Dunzo has you covered. (a+b)^2 seems like a mystery? Byju's has a solution for you. Short on cash? Paytm Karo! 'Startup' has become the buzzword among millennials, and for all the right reasons.

India ranks third globally in terms of its startup ecosystem, after the US and China. We have made huge strides in the Global Innovation Index and are currently placed 57th on the list. 2018 alone witnessed c. 1,200 startups spring up in India, along with more than 50% growth in the number of 'advanced tech' startups.

2018 also saw an increase in the number of unicorns, or startups valued at over $1bn, with eight startups including Freshworks, OYO, Swiggy, Billdesk, Policybazaar, Udaan and Byju's entering the unicorn club. In 2019, more startups like Practo, Big Basket and MobiKwik are expected to cross this much-coveted mark. This represents a paradigm shift in the Indian economy.

Startups usually choose Bangalore, Delhi-NCR and Mumbai as their prime locations. However, with the expansion of technology and the push for skill training, cities like Hyderabad, Pune, Chennai, Chandigarh, Jaipur, Indore and Kochi are also emerging as startup hubs. The opening up of quality education centres and engineering and management colleges, incubation centres like IIM-C Innovation Park and RICH (Research and Innovation Circle of Hyderabad), improving IT and telecom infrastructure, and government-backed programmes like the Atal Tinkering Labs, the Fund of Funds for Startups, tax exemption schemes and the National Initiative for Developing and Harnessing Innovations provide the sector with a much-needed fillip. An established network of angel and venture capital investors, like the Calcutta Angels Network, only further strengthens the case.

Merger and acquisition activity has also picked up momentum in the startup industry of late. Walmart's $16bn acquisition of Flipkart made headlines. Such deals create valuable synergies in the market as companies combine their tech capabilities, expand their market coverage and tap human resource potential.

You might have heard of the show 'Shark Tank', where budding entrepreneurs pitch their ideas to a panel of investors, persuading them to put money behind the idea. Truly, the market for startup funding is nothing short of a shark tank. But the latest trends look promising. Asia, led by China, has become the new startup centre for global investors, and this has resulted in increased global investment in Indian startups. Total funding in Indian startups increased over 108%, from $2bn in 2017 to $4.2bn in 2018. OYO Rooms was the biggest gainer, raising $1bn from SoftBank, Lightspeed and Sequoia Capital, making it the most valuable startup in India. It was followed by Paytm Mall, Swiggy, Udaan, Curefit, ShareChat, Lendingkart, Grofers and Qtrove.

There has also been a perceptible change in the way we view entrepreneurship and innovation. Earlier, society harboured a very conservative view, and the very idea of leaving behind a settled job and life to start a company was frowned upon. Funding was also a major roadblock.

Back in the 2000s, India's network of seed and angel investors, venture capitalists and private equity investors was very weak. But the times are changing.
Creativity is actively encouraged, as is evident from the increasing number of straight-out-of-college entrepreneurs, government initiatives, the growing number of entrepreneurship competitions in schools and colleges, support from faculty and mentors at premier institutions like the IITs and IIMs, and, most importantly, failure being seen as a stepping stone to success. Today 35% of startup founders are engineering and MBA graduates. There has also been a consistent rise in the number of women entrepreneurs, with their share increasing to 14%. Rashmi Daga, winner of the ET-Facebook Woman Ahead award and founder of FreshMenu, is one such example.

According to a report by the World Economic Forum and Bain & Company, India will have over 1 billion active internet users by 2030, who will have greater access to a variety of goods and services than their predecessors. Startups need to leverage this trend. A growing telecom industry, plummeting data costs and greater use of mobile internet services will facilitate the growth of startups. If India wants to become a global economic powerhouse and prevent its growing demographic dividend from turning into a curse, it must create more jobs. A flourishing startup sector can be an answer to the growing unemployment in the country. The sun has just risen for startups, and there are miles to go.

Transfin. LongShorts Podcast E:33

We like to talk Business and Finance. Figured we should do it for a living.

Sunday kicked off the Festival/Dance/Carnival (you get the point!?) of Democracy. But we spent our time discussing:

The Troubles of Boeing

Boeing's reputation has once again come into question following the recent Ethiopian Airlines crash, which came barely six months after the Lion Air crash late last year. The proximity of these two crashes involving the same aircraft model has led to countries around the world grounding the Boeing 737 MAX. We discuss what this could mean for Boeing and how its competitors potentially stand to gain from the entire episode.

How RBI's Rate Cut Not Hitting your Loan has Something to Do with Your FD

Of late, mutual funds have witnessed increasing churn, and the FD is emerging as the favoured asset class. One reason could be equities and riskier asset classes not performing in line with expectations. Another could be the slump in savings, which has naturally incentivised banks to keep deposit rates high. Against this background, we discuss why the RBI's rate cut is not translating on the ground despite multiple reminders from the central bank.

YouTube's Two-Pronged Indian Foray

Google recently launched its YouTube Music, YouTube Music Premium, and YouTube Premium services in India. The ad-supported version of YouTube Music will be free, YouTube Music Premium will cost users INR99 per month, and YouTube Premium will cost INR129 per month. But will YouTube's latest offering for India be able to survive amid stiff competition from rivals such as Amazon Music, Apple Music, JioSaavn, Gaana and Spotify? Listen in.

More on the Comet Aircraft: de Havilland Comet

Hot Water, Conditions Applied, Collection O and More...

Professor S

TheWeekThatWas

US Grounds All Boeing 737 MAX Models, Embassy Group to Launch India's First REIT, UK Votes to Delay Brexit, Qualcomm Wins Patent Infringement Case Against Apple et al.

Professor S

End Of Day Wrap Up

US grounds all Boeing 737 MAX models. Boeing at risk of losing orders worth $600bn. China's industrial output drops to a 17-year low. UK votes 413-202 to delay Brexit. Embassy Group to launch a REIT to raise INR4,750cr. Tesla may come to India soon. Apple to pay Qualcomm c. $31m for patent infringement. Spotify files an antitrust complaint against Apple.

Moving on to the top Business stories of the week.

BOEING

US grounds all Boeing 737 MAX models. Boeing at risk of losing orders worth $600bn; plans to release upgraded software for its 737 MAX in the next 10 days. The 737 MAX revamp could cost Boeing over $2.5bn.

Last Man Down: President Trump announced that the Federal Aviation Administration would ground Boeing's fleet of 737 MAX airliners after new data indicated that last weekend's deadly crash in Ethiopia resembled the Lion Air crash in October last year.

All Right: Boeing said it maintained full confidence in the MAX but decided to recommend a temporary grounding to reassure the flying public. Since the crash on Sunday, regulators in dozens of countries have suspended flights by the single-aisle airliner, including the UK, Australia and Canada. The US was the last significant aviation market still allowing the 737 MAX to operate.

Buyers Beware: More than $600bn worth of orders for Boeing's 737 MAX models hang in the balance as customers threaten to reconsider their purchases in light of the recent Ethiopian Airlines crash. These include:

- VietJet Aviation, which doubled its order to about $25bn only last month
- Kenya Airways, which is reviewing proposals to buy the MAX and could switch to the Airbus A320
- Utair Aviation, which is seeking guarantees before taking delivery of the first of 30 planes
- Lion Air, which decided to drop a $22bn order for the 737 in favour of the Airbus jet

Damage Control: Boeing plans to release upgraded software for its 737 MAX within the next 10 days.

This is Why: Boeing has been working on a software upgrade for an anti-stall system and pilot displays on 737 MAX aircraft in the wake of the deadly Lion Air crash in Indonesia in October. Read more on the matter here.

Costing Dearly: The software fixes on Boeing 737 MAX models are likely to cost the Chicago-based aircraft maker over $2.5bn as it redesigns a computerised flight-control system on hundreds of jets. Boeing could end up spending $500m on the software rejig alone. Another $2bn could go towards delivery delays and reimbursements to airlines for flight disruptions.

GLOBAL

China's industrial output drops to a 17-year low. Unemployment rate also on the rise. UK votes 413-202 to delay Brexit.

Long Time Coming: China's industrial output growth fell to a 17-year low in the first two months of the year. Industrial output rose 5.3% in January-February vs 5.7% in December, lower than the expected 5.5% growth.

Silver Lining: Growth in fixed-asset investment, a major growth driver in the past, increased to 6.1% in the first two months of this year, up marginally from 5.9% in 2018. Retail sales were also marginally better than expected, with the headline figure rising 8.2% in January-February from a year earlier.

More Help: In addition to fiscal stimulus such as higher local government spending and tax cuts, more monetary policy support is also expected this year.

No Jobs: China's unemployment rate jumped to 5.3% in February from 4.9% in December on the back of low industrial output and slow retail sales expansion.
Perfect Timing: The jump in joblessness comes just days after Premier Li Keqiang announced an employment-first strategy as a key part of economic policy for the coming year.

Side Effect: The rise in the unemployment rate shows growing pressure from the US-China trade war on China's job stability.

Fun Fact: China combines January and February activity data in an attempt to smooth distortions created by the long Lunar New Year holidays early each year, but some analysts say a clearer picture of the economy's health may not emerge until first-quarter data is released in April.

Wait: UK lawmakers have voted 413-202 in favour of delaying the Brexit process, acknowledging that more time is needed to break the deadlock over Britain's departure from the European Union (EU).

All For: This allows Prime Minister Theresa May to approach the EU for an extension to Article 50, the legal process under which Britain is leaving the EU. May had been forced to offer MPs a vote on delaying Brexit after they rejected her withdrawal agreement by a large margin for a second time, and then voted to reject a no-deal Brexit.

Ask Again: A request for an extension of Article 50 will require the unanimous agreement of all 27 other EU Member States.

Opinion: With Brexit now on hold, there's only one option left - compromise.

COMPANIES

Brookfield to acquire RIL's East-West Pipeline. Tesla may come to India soon. OYO acquires Innov8 in an INR220cr deal. Google launches YouTube Music, YouTube Music Premium, and YouTube Premium in India.

Costly Deal: Canada-based Brookfield Asset Management's India Infrastructure Trust, an InvIT, is set to acquire RIL's East-West Pipeline (EWPL) in an INR13,000cr deal. Brookfield has filed a preliminary placement memorandum, through which its InvIT will invest INR13,000cr to acquire EWPL. As part of the transaction, the InvIT will acquire 100% equity in Pipeline Infrastructure Private Ltd (PIPL), which currently owns and operates the pipeline. RIL will get the right to acquire the equity shares of PIPL held by the InvIT, at an equity value of INR50cr, at the end of 20 years.

Tesla in India: Replying to a query on Twitter, Elon Musk said the company would love to be in India this year - if not, definitely next. Interestingly, the comment comes months after he blamed restrictive policy for delaying the carmaker's entry into the world's fourth largest automobile market - India.

The Deal: Hospitality chain OYO has acquired Gurugram-based coworking startup Innov8 in an all-cash deal worth about INR220cr. Under OYO, Innov8 will work towards creating a capacity of 10,000 seats across India. Founded in 2015, Innov8 currently hosts over 350 companies as members and claims 95% occupancy across its 15 coworking spaces, with combined seating of approximately 5,500. Besides, OYO itself has started two new co-working brands - PowerStation and WorkFlo - which will cater to a variety of startups and companies.

Another One: Google launched its YouTube Music, YouTube Music Premium, and YouTube Premium services in India.

Following Suit: The news comes shortly after the launch of Spotify in India. The ad-supported version of YouTube Music will be free, YouTube Music Premium will cost users INR99 per month, and YouTube Premium will cost INR129 per month. A subscription to YouTube Premium also unlocks features like the ability to play videos in the background while running other apps, offline downloads, access to YouTube Originals, and YouTube Music Premium.
Long Road: In India, the YouTube services will compete against other streaming platforms including Amazon Music, Apple Music, JioSaavn, Gaana and Spotify.

TECH

Apple to pay Qualcomm c. $31m for patent infringement. Spotify files an antitrust complaint against Apple. Google is likely to be hit with a third EU antitrust fine next week. SoftBank to start a new fund for early-stage investments.

No Escape: Following a two-week trial, a jury in federal court in San Diego determined that Apple had violated three Qualcomm patents in some iPhones.

What You Need to Know: Qualcomm had last year sued Apple, alleging it had violated patents related to allowing phones to connect quickly to the internet after they are switched on; battery efficiency and graphics processing; and a traffic management function that allows apps to download data faster. Qualcomm asked the jury to award it unpaid patent royalties of up to $1.41 per iPhone that violated the patents.

Unfair: Spotify has filed an antitrust complaint in the EU against Apple.

What's the Case?: Spotify alleges that Apple has in recent years abused its control over which apps appear in its App Store with the aim of limiting competition with its streaming service, Apple Music. Spotify claims that Apple made it difficult for rival subscription services to market themselves to users without using Apple's payment system, which generally takes a 30% cut of transactions.

Threats: The European company also said Apple at times rejected security updates to its app and threatened to kick it out of the App Store for allegedly anticompetitive reasons.

Not the First: Google is likely to be hit with a third EU antitrust fine next week, related to its AdSense advertising service. However, the sanction is expected to be much smaller than previous fines.

Backstory: The European Commission had in 2016 opened a third case against the world's most popular internet search engine, accusing Google of preventing third parties using its AdSense product from displaying search advertisements from Google's competitors. It said that Google, which at the time had held 80% of the European market for search advertising intermediation over the previous ten years, had maintained these anti-competitive practices for a decade.

SoftBank: The group is set to launch a new global investment fund for early-stage investments. The fund, to be run by Seoul-based SoftBank Ventures Asia, will be worth $500m. SoftBank, South Korea's National Pension Service, as well as other companies and asset management firms, will invest in the fund.

INVESTMENTS

OYO invests INR1,400cr in its India and South Asia businesses and launches a millennial-focused new brand, Collection O. Embassy Group to launch a REIT to raise INR4,750cr.

Hospitality firm OYO has committed over INR1,400cr to its India and South Asia businesses.

Need to Grow: The infusion will be used to fund expansion plans, improve customer experience and ensure continued success for asset owners.

Young Ones: OYO also announced a new category called Collection O, which will focus on millennials and young travellers and will be added to its existing portfolio of budget and mid-segment hotels.

Going Desi: Global investment giant Blackstone and Bangalore-based real estate developer Embassy are set to launch India's first real estate investment trust (REIT) next week to raise about INR4,750cr. The REIT, which includes Embassy Group properties, will offer as many as 158.6 million units at c. INR300 apiece.
Bonus: REITs offer a unique 'financial markets' avenue to gain exposure to real estate without requiring investors to buy physical assets. Click here to understand all about this new way to invest in real estate in India.

Boeing 737 Max 8 Crash: An Escalating Crisis

Professor S

LongShorts

Background: Less than six months after the crash of the Lion Air flight that killed 189 people, Boeing finds itself in hot water yet again - this time following the crash of an Ethiopian Airlines-operated aircraft last Sunday, which left no survivors.

Not the First: The Ethiopian Airlines-operated 737 MAX 8, Boeing's flagship model, was carrying 149 passengers and heading for Nairobi when it crashed barely six minutes after takeoff. The same model was involved in another crash late last year, when a Lion Air flight from Jakarta crashed into the Java Sea, killing all 189 people on board. The proximity of these two crashes, coupled with the questionable MCAS (Maneuvering Characteristics Augmentation System) anti-stall flight control system, has put the spotlight back on Boeing.

Big Earner: The 737 MAX series, launched by Boeing in 2016, was built with the aim of providing better fuel economy for short-distance flights, in a bid to take on its rival, the Airbus A320neo, a similar offering which then dominated the market. The 737 MAX has been the most popular of Boeing's offerings with airlines around the world: 354 of the jets have been delivered globally, with 2,912 still on order. However, many of the orders in the pipeline may now be put on hold or cancelled in the aftermath of the tragedy.

Faulty System: As per the US Federal Aviation Administration, at the centre of both crashes is the MCAS anti-stall mechanism, a system that takes in data from various sensors and adjusts flight parameters to ensure that the aircraft does not stall and keeps flying.

After the first crash, it was found that the system could be triggered by erroneous data from sensors, which could lead it to operate in an unintended manner. Data from the second crash also shows evidence that faulty inputs from the system might have contributed to the crash. If investigations conclusively prove that MCAS had a hand in the crash, all 737 MAX models would need to be grounded. This could mean serious trouble for Boeing, for whom the 737 MAX series has been its most successful aircraft. It has been estimated that more than $600bn worth of orders for the 737 MAX hang in the balance as customers threaten to reconsider their purchases.

Pilots Unaware: Another grim picture is painted by pilots who have previously flown the 737 and claim that the MCAS was not included in their training programme, and that most of them came to know of the system only after the first crash. This would mean that pilots operating a 737 MAX would not have had adequate knowledge or training to correct any deviation caused by the MCAS.

But the MCAS might not be the only malfunctioning component of the 737. Some pilots report other issues with the plane concerning the autopilot system, which is independent of the MCAS. There may still be some unknown variable, other than the MCAS, affecting the aircraft.

Dire Consequences: In light of the incident, several countries and regions, including the US, the EU, China, Indonesia, India and Ethiopia, have grounded all MAX aircraft. Boeing is likely to be most affected by the ban in China, as more than half of 737 MAX orders come from the country. Analysts predict that if investigations lead to the grounding of all 737 MAX models, Boeing could lose up to $5bn, which amounts to around 5% of the company's annual revenue.
Ironically, Boeing stock had recently ridden to near-record highs on the 737 MAX's early sales.

Meanwhile, rival Airbus, whose similar offering, the A320neo, has had a better service history than the 737, stands to gain much from Boeing's troubles.

Draft National E-Commerce Policy: Treading the Thin Line Between Regulation and Restriction

Professor S

LongShorts

Backdrop: The government recently released a draft e-commerce policy for stakeholder comments, barely two months after the rollout of its new FDI policy, which shook the nascent industry, especially the major players. The draft deliberates upon a comprehensive framework aimed at preserving consumer interest through the creation of suitable regulatory mechanisms.

However, like most policy actions, it walks a fine line between managing citizens' interests and creating a less-than-conducive regulatory environment for the industry at large - one that may not allow the country to reap the maximum benefit from the rapid digitalization of the domestic, as well as the global, economy.

Let's Start from the Start: India's burgeoning e-commerce market was valued at $38.5bn in 2017 and is estimated to rise to $200bn by 2026. Electronic commerce and data are emerging as key enablers and critical determinants of India's growth and economic development, facilitated by cheap smartphones and even cheaper data.

Here's a rundown of the key points:

On Point?: Six broad issues are addressed: i) data, ii) infrastructure development, iii) e-commerce marketplaces, iv) regulatory issues, v) promotion of the domestic digital economy and vi) exports.

Data is the New Oil: There is an overwhelming push for a robust administrative, regulatory and legal mechanism to control data flows. The word "data" itself appears more than 200 times in the 42-page document. The principal case made is that an individual consumer/user who generates data retains ownership rights over it.

Viewed in conjunction with the Personal Data Protection Bill (submitted to the government by the Justice BN Srikrishna Committee on 27th July 2018), the policy at the very least envisages regulating cross-border data flows while enabling the sharing of community data (data collected by IoT devices installed in public spaces like traffic signals or automated entry gates).

Breaking it Down:

- No data collected or processed in India shall be made available to a third party or to other business entities outside India, for any purpose, even with the consent of the customer. Neither can it be made available to a foreign government without the prior permission of Indian authorities. The document, however, is light on details around potential implementation mechanisms.
- Push for a three-year data localization requirement.

Backbone: Development of data-storage facilities/infrastructure is recognized as another vital part of the value chain.

- Data centres, server farms, towers and tower stations, equipment, optical wires, signal transceivers, antennae etc. will be accorded 'infrastructure status', facilitating access to longer-maturity loans, easier lending terms, and even cheaper foreign currency funding through the external commercial borrowing route. Infrastructure status also seeks to streamline regulation of the sector.
- Budgetary support is to be provided for the exploration of domestic alternatives to foreign-based cloud and email facilities.

Supply Chain Transparency:

- To streamline the functioning of the e-commerce sector under the FDI Policy, e-commerce websites/applications are required to ensure that all product shipments from other countries to India are channelled through the Customs route. The Policy provides for integrating Customs, RBI and India Post systems to improve the tracking of imports through e-commerce.
- All e-commerce sites/apps operating in India must have a registered business entity in India as the importer on record or as the entity through which all sales in India are transacted.
- All parcels under the 'gifting' route are to be banned, with the exception of life-saving drugs. This move comes in light of companies exploiting India's "gifting" rule, wherein personal gifts priced below INR5,000 are exempt from duties. Several red flags have been raised in recent times over numerous "gift" deliveries being made to the same address and heavy 15-kilogram parcels being brought in with a declared value of just INR3,000 (a toy version of such a screen is sketched after this list).
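The kind of screening this implies can be illustrated with a few lines of code. The INR5,000 exemption and the 15kg/INR3,000 example come from the text above; the weight and repeat-delivery thresholds, the function name and the data layout are assumptions made purely for illustration, not anything specified in the draft policy.

```python
# Illustrative sketch only: a toy screen for abuse of the INR5,000 "gifting"
# duty exemption described above. Thresholds below are assumed, not official.
from collections import Counter

GIFT_DUTY_EXEMPTION_INR = 5_000   # personal gifts below this declared value are duty-exempt
SUSPICIOUS_WEIGHT_KG = 10         # assumption: heavy parcels with a low declared value look odd
MAX_GIFTS_PER_ADDRESS = 3         # assumption: many duty-exempt "gifts" to one address raise a flag

def flag_suspicious(parcels):
    """parcels: list of dicts with 'address', 'declared_value_inr', 'weight_kg'."""
    exempt_per_address = Counter(p["address"] for p in parcels
                                 if p["declared_value_inr"] < GIFT_DUTY_EXEMPTION_INR)
    flags = []
    for p in parcels:
        exempt = p["declared_value_inr"] < GIFT_DUTY_EXEMPTION_INR
        if exempt and p["weight_kg"] >= SUSPICIOUS_WEIGHT_KG:
            flags.append((p, "heavy parcel with low declared value"))
        if exempt and exempt_per_address[p["address"]] > MAX_GIFTS_PER_ADDRESS:
            flags.append((p, "many duty-exempt 'gifts' to the same address"))
    return flags

# Example: the 15kg parcel declared at INR3,000 cited above gets flagged.
print(flag_suspicious([{"address": "A-1, Delhi", "declared_value_inr": 3_000, "weight_kg": 15}]))
```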
Watchdog: Given the inter-disciplinary nature of the sector, a Standing Group of Secretaries on e-Commerce (SGoS) would be appointed to regulate these issues effectively. No standalone regulator has been proposed so far.

Bonus: The Policy also proposes regulation of advertising charges in e-commerce (including on social media platforms), to create a "level playing field" for small businesses, who otherwise have to allocate an excessively high proportion of their budget and working capital to advertising in order to find their potential customers. In our view such a stance borders on regulatory overreach, and we would be very wary of its detailed wording, whenever it comes out.

In Conclusion

While the draft e-commerce policy means well for the MSMEs and startups seeking to break through in a competitive space, it is also likely to increase their compliance costs, as they will have to restructure how they store and share data.

Giants such as Amazon and Flipkart will likewise be hit in a significant manner, forced to make huge changes to comply with the proposed rules, even if they have often been known to find legal or other ways to circumvent potential downsides.

The enhanced cost of compliance may also have an adverse bearing on the rate of investment in Indian e-commerce, specifically on FDI inflows.

As for consumers, the policy seeks to offer some respite with strong anti-counterfeiting and anti-piracy measures, pushing e-tailers to publicly share all relevant details of the sellers listed on their portals and to ensure speedy redressal of consumer grievances.

While the draft at multiple instances reiterates the need for a facilitative regulatory environment for the growth of the e-commerce sector, it falls short of providing specific details on implementation or addressing operational nuances.

Moreover, the manner in which the policy addresses the question of ownership of personal data has been termed "unusually parochial", often directly at odds with the recommendations of the Justice Srikrishna Committee and the decision of the Supreme Court in its right to privacy judgement.

With the Department for Promotion of Industry and Internal Trade having kicked off a round of stakeholder consultations on the draft policy, one can only hope that future iterations don't propose ham-fisted solutions to problems, but rather push for a more definite and implementable framework.

Social Media, Preferential Trade, Spotify Debut and More...

Professor S

TheWeekThatWas