
misreportedandmisremembered

Bringing context and perspective to the chaos

The Power of a Buck

In theory, the creation of currency facilitated exchange between two individuals and provided a tangible store of value for anyone offering a service rather than a direct good. As the worlds of early societies shrank and groups with differentiated economies began to seek formal relationships, a new problem arose – what was one unit of currency A worth when held up against one unit of currency B? Foreign exchange rates play a far more prominent role in currency denominations today than when gold was exchanged for grain. But prominence has come hand in hand with complexity, not least because the currencies underpinning financial products now offer direct indications of the health of both the product being sold and the nations in whose currency those products are denominated.

In this, it becomes apparent that certain currencies have greater value per unit than others – the primary example being the American dollar, the global standard for trade. The US dollar has risen to prominence because, amongst other reasons, it is the primary currency used in commodities markets, such as gold and petroleum, and the growth of the American economy since the Second World War has ensured that the dollar provides a secure base for transactions given its historical stability and widespread usage. These factors have contributed to the American dollar becoming the unofficial global reserve currency and the official currency adopted in developing nations across the globe.

Here, an ironic turn emerges – the stability of the dollar, and thereby of dollar-denominated debts, is both a blessing and a curse. A ‘strong’ dollar (a comparative measure indicating that the US dollar has risen to historically high levels relative to other currencies) has numerous ripple effects within surrounding economies. A risk emerges when examining companies with reserves or debt denominated in foreign currencies – a strong dollar acts as a trend amplifier, strongly felt within capital reserves, causing markets to flee to the dollar when faced with the prospect of a bearish market. This stampede and the ensuing dollar binge only exacerbate the existing problem, further amplifying the flight towards ‘stability’ (debt often denominated in either American dollars or Japanese yen, another notoriously stable currency).

A strong dollar is typically met with interest rate rises from the Federal Reserve, increasing the cost of borrowing dollars and (hopefully) acting as a stabilizing force to dampen the dollar binge. But an ascendant greenback also stunts inflation by making foreign goods cheaper for any individual or entity holding the dollar. This means that while markets adjust to interest rate hikes, consumer behaviour and buying patterns can act as a short-term counterweight to the efforts of central bankers. An additional impact is felt on foreign shores – sharp falls in local currencies force central banks to raise interest rates in an effort to halt depreciation and contain the imported inflation it brings. Alternatively, a strong dollar can force economies to lower interest rates to historic lows in an effort to attract investment and spur domestic spending when their currencies ‘weaken’ and the decrease in the real value of savings hits the portfolios of consumers. This puts the Federal Reserve in an awkward position – raising interest rates risks undercutting global economic growth, triggering downturns in struggling economies that lack the traditional fiscal and monetary mechanisms to mitigate adverse impacts. But keeping interest rates low fails to address the simmering issues that threaten to cripple growth across the globe.

Why does this matter? The currencies that move in line with the greenback encompass 60% of the world’s total GDP. The volume of dollar-denominated bonds held in developing markets amounts to over $3T, and each time the dollar rises, so does the cost of servicing those debts. A strengthening dollar could see a spiral emerge in which capital outflows go towards paying debts instead of domestic investment, and the resulting fall in asset prices could lead to an economic downturn similar to that experienced by Brazil over the last five years. Presidential promises to lower corporate tax rates, expressly aimed at encouraging American companies to repatriate earnings onto American soil, would add rocket fuel to the rise of the greenback, while protectionist measures would directly impact American consumers as the cost of imported goods increased and export growth continued to decline.
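The mechanics of that debt-service squeeze can be sketched with hypothetical numbers – the principal, interest rate and exchange rates below are illustrative only, not figures from the text:

```python
# Hypothetical illustration: the local-currency cost of servicing
# dollar-denominated debt rises one-for-one with the dollar.
def annual_service_cost_local(principal_usd, interest_rate, fx_rate):
    """Local-currency cost of one year's interest on USD debt.

    fx_rate: units of local currency per US dollar.
    """
    return principal_usd * interest_rate * fx_rate

# A borrower owing $100M at 5% interest, before and after a
# 20% appreciation of the dollar against the local currency.
before = annual_service_cost_local(100e6, 0.05, 4.0)  # 20.0M local units
after = annual_service_cost_local(100e6, 0.05, 4.8)   # 24.0M local units
print(f"Service cost rises {after / before - 1:.0%}")  # prints "Service cost rises 20%"
```

The borrower's revenues are typically earned in the weakening local currency, which is why the appreciation flows straight through to the servicing burden.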

Despite all the talk of trade deficits throughout the 2016 presidential campaign, the reality remains that the United States can entirely afford to operate at a continuous trade deficit due to the high demand for debt instruments denominated in American currency, which generates capital inflows capable of subsidizing a deficit. But that does not mean a deep structural issue does not exist – the strength of the American dollar poses a threat to all, from domestic consumers to foreign governments. As the world’s buffers for mitigating financial downturns shrink, the likelihood of the next recession being felt more severely by consumers increases – and whichever of currency A or B you use, you can take that to the bank.

A Subjectively Better World

The inequality fostered in modern markets is well documented and has been analyzed ad nauseam by experts and students alike. Despite the existing flaws, the general principle has been that capitalism, adopted across the world to create global trade routes and financial systems, provides a greater overall benefit for more people than protectionist tendencies that look inward to solve problems. Look no further than Brexit, which in its current form is estimated to cost £66bn annually and slash the GDP of the UK by 9.5% over the next 15 years. But voters clearly indicated a willingness to sacrifice national economic growth for increased sovereignty in policy and trade decisions, and to opt for economic measures that prioritize UK organizations over international competitors for funds (the row over funding the NHS rather than paying to access the EU market during the Brexit campaign was a primary example of this nationalist focus).

Voters have, throughout the world, categorically demonstrated a desire to act against the grander self-interest of nations – but not necessarily of themselves. Physical and financial insecurity have combined with an increasing sense of marginalization and resulted in two outcomes: a loss of faith in experts who deal in amalgamations and predictive modelling (and thereby are seen as not understanding the plight of the single family), and a desire to sacrifice growth and future prosperity to live in a more equitable state. An essay penned in Foreign Policy Magazine by Anand Menon and Camilla MacDonald recently outlined that in the case of Brexit, equity referred to the prioritization of co-nationals and tackling wealth inequality in Britain, as well as reducing household debt levels and stabilizing regional housing markets. Border security was also of chief concern, with the physical safety of individuals and families in urban areas seen as an existential-level threat.

This focus upon the strength of community and identity reflects values shared by another movement – the modern environmental movement. Although distinctions can clearly be drawn on individual policy decisions, both movements are guided by the notion that a better world is not a more prosperous one for the masses, but rather a more equitable one for the individual. Both illustrate that low unemployment figures and repeated interest rate rises provide little respite for individuals facing growing health care and housing costs. Both speak of having had “enough” – one of a perceived or existing repression, the other of the consumption of a finite resource base – and demand collective action from the masses to instigate change.

Leaders of the environmental movement often call for “the end of growth” – a principle that the current system (of infinite growth using finite resources) exists to enrich the elites of today at the cost of the masses, that this cost directly impacts the future livelihood and security of individuals, and that communities must use their voice to protect their best interests in a system that benefits from their silence. But how heavily do the ideas underpinning the theory that growth is at its end align with those underpinning modern populism?

There are three foundational ideas behind why growth is unsustainable in the current climate: depletion of finite resources, environmental degradation and increasing levels of debt. In each of these three, parallels can be drawn – the notion of depleting a finite volume of resources is similar to that of sovereignty. Both rely upon messaging that current levels of growth and migration are unsustainable and, if continued, will directly result in negative impacts. Both paint grim scenes of carnage, with droughts and crime abound. And both foster a deep anti-elitist sentiment – one directed at large corporations emphasizing profitability over the health of citizens, the other at governments for letting in potentially dangerous individuals in favour of cheap labour.

The idea of a degrading environment is similar to the concept of the real cost of economic progress. Every politician or business leader touting innovation and growth is viewed as the enemy, given that they typically do not outline the tangible cost of prosperity: disruption and degradation. The costs are the jobs of individuals who no longer possess the skills deemed valuable by the market, the natural environments whose value cannot be directly calculated and are thus paved over. Often touted as the primary objective, growth too often sees externalities and hardship fall not upon spreadsheets, but on families and homes. As for increasing levels of debt, no direct parallel is needed. Crushing household and national debt is a tangible impact felt by everyone, whether it be money borrowed to make car payments or natural systems destroyed whose absence will be noted – but not within the next fiscal year.

Occasionally, politicians manage to embody both sides – left-leaning populists in the vein of American presidential candidate Bernie Sanders and Dutch politician Jesse Klaver often speak to rising inequality and a need to create a “better” world. And in there, a possibility exists that rising populism may actually signify a shift in modern societies towards more thoughtful growth and consumption wherein individuals are prioritized. If growth can be designed to be more inclusive overall, it stands to reason that less of it would be required to foster the same net results. Then maybe, at long last, someone can finally give some money to the NHS.


Storming about the Climate

Nowhere is the divide between members of the scientific community and the general public more apparent than in an individual’s interpretation of the difference between climate and weather – the lack of distinction between compiled data and anecdotal evidence has been known to anger many a scientist, and dramatic calls of apocalyptic consequence breed despair and snorts of derision amongst the general public. Governments across the globe recognize, seeing the direct consequences and risks of sea-level rise to seaside population centres and degrading air quality in urban areas, that action is required.

Yet it is difficult to effectively claim the societal response to a changing climate has been appropriate – politicization surrounding exactly which party is responsible for warming has clouded the debate surrounding action, highlighting the divide between leaders and the people they lead. The majority of blame for inaction tends to be dealt to “skeptics” – individuals who claim that climate science has been unable to reach a definitive conclusion regarding the extent to which anthropogenic GHG emissions have led to rises in global temperature. This is the reality of progress – the purpose of scientific inquiry is to create testable hypotheses that inform us of the realities of the physical world we inhabit. It is a human endeavour, meaning mistakes are common. Nevertheless, the role of discovery in our society is to inform decision-making and progress – not to provide evidence in an effort to disprove personal beliefs.

An equal danger exists in climate catastrophizing. Public figures have long positioned climate change as an omnipresent existential threat capable of creating a non-habitable world within a single generation, in a well-meaning effort to spur action. But this has, in large part, had the opposite effect – climate researchers and activists commonly suffer from PTSD- and depression-like symptoms. Industry members bicker over whether solutions like natural gas, which has lowered carbon emissions in the United States more than all renewable energy investments combined, should even be considered, given that they are not carbon neutral. Individuals feel overwhelmed by the scale and scope of the issues presented, and are numbed into inaction – after all, what can one person do against an entire planet?

With skeptics touting falsehoods and catastrophists accepting nothing less than 100% mitigation, the most commonly cited reports paint a more realistic picture. The Fifth Assessment Report, the gold standard created by the United Nations Intergovernmental Panel on Climate Change (IPCC) and released in 2014, estimates approximately 3-4 degrees Celsius of warming by 2100, with a corresponding sea-level rise of 0.6 metres.

The difficulty in effectively predicting the true impacts of climate change is that, beyond the understanding that increased levels of GHGs will result in greater heat being trapped in our atmosphere, scientists can only estimate. The rate of gas accumulation in the atmosphere, the corresponding warming that volume of gases will cause, the effect of that warming upon natural and man-made systems – each of these can only be estimated. These factors are projected using Representative Concentration Pathways (RCPs), which provide four separate estimates of increases in atmospheric radiative forcing over the next century (radiative forcing refers to the balance between sunlight absorbed by the atmosphere and energy reflected back into space; GHG molecules absorb and re-radiate heat back towards the earth, increasing radiative forcing and making the world warmer).
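To give the forcing concept a concrete anchor, a widely used simplified expression (not given in the text, and an approximation rather than a full climate model) puts the extra radiative forcing from CO2 at about 5.35 × ln(C/C₀) watts per square metre, where C₀ is a pre-industrial baseline concentration:

```python
import math

# Simplified approximation of the additional radiative forcing
# from CO2 relative to a pre-industrial baseline of ~280 ppm.
def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate extra forcing in watts per square metre."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling CO2 from the pre-industrial baseline:
print(round(co2_forcing(560.0), 2))  # prints 3.71
```

The logarithm is the key feature: each doubling of concentration adds roughly the same forcing, which is why scenarios are framed in terms of forcing levels rather than raw emissions.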

Each pathway assumes a different level of human action taken to reduce total pollutants emitted over the next 100 years, with the impacts of the most and least optimistic paths divided by a factor of almost 6. The most extreme of these pathways, assuming no action taken to reduce emissions, still sees only a 0.8 metre increase in sea level over the next century (slightly higher than predicted), with a corresponding 12 degree increase in temperature – but by 2300.

Further, economists can now produce Integrated Assessment Models (IAMs), which attempt to convert the direct and indirect impacts of climate change into tangible costs. Such models point to the conclusion that a 3-4 degree increase in global temperature would cost approximately 1-4% of global GDP in 2100, approximated at $20T USD in shared burden across the world (a cost, worth noting, that exceeds the entire GDP of the United States in 2016). Equally of note – the same models never show economic growth falling by more than 0.05% in any single year of the next century, a result worth treating cautiously, as climate impacts will arrive gradually throughout the century and economic fluctuations are inevitable.
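The “percent of 2100 GDP” framing can be sanity-checked with back-of-envelope compounding. The starting GDP and long-run growth rate below are assumptions for illustration, not figures from the models cited:

```python
# Back-of-envelope check of the "1-4% of 2100 global GDP" framing.
# Assumed inputs (not from the text): 2016 global GDP of ~$75T and
# a long-run growth rate of 2.5% per year.
gdp_2016 = 75e12
growth = 0.025
years = 2100 - 2016

gdp_2100 = gdp_2016 * (1 + growth) ** years
for share in (0.01, 0.04):
    print(f"{share:.0%} of 2100 GDP: ${share * gdp_2100 / 1e12:.1f}T")
```

Under these assumptions the range works out to roughly $6T-$24T, bracketing the article’s ~$20T figure; the point of the exercise is that the absolute number is extremely sensitive to the assumed growth rate compounded over 84 years.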

No economic model can effectively integrate non-tangible value, like community ties to land or the moral obligation of our species to preserve the planet – but nevertheless, the results of environmental and economic models point to real but manageable costs unfolding over the next century. Viewing the costs of climate change in grander context, applying appropriate time-scales to forecasting and segmenting models based on their assumptions allow for the affirmation that climate change both exists and can be adapted to.

Humanity has an incredible capacity to overcome the challenges of our natural world – we have cured disease, fed the planet and created monuments to the greatest achievements of our species. There is no reason to believe we cannot adapt to a warmer world. But it does involve a transition in the way we view climate change – instead of mitigating future impacts, greater incentive should be given to adapting our current systems to inevitable transformations. The impacts of carbon already built up in Earth’s atmosphere will not be fully felt for the next 50 years, meaning irreversible damage has already been done. It is our responsibility to now ensure that we create a world well suited to our new reality.

Claims of positive feedback loops, resulting in cascading climate effects, remain unverifiable over the timeframes in which they are estimated. But in this apocalyptic vision of the future, investment into improved water management systems, more resilient and accessible energy infrastructure and public health practices will pay higher dividends still, as will spending on effective physical and digital infrastructure systems, and innovations not yet dreamed up by generations to come.

So have kids, because hey – someone is going to need to save the rest of us.


The Freedom to Speak

To speak freely is to express opinions and beliefs without censorship or restraint. This right is enshrined in the United Nations Universal Declaration of Human Rights and is trumpeted in the laws of the majority of sovereign nations across the globe. However, distinctions are required: freedom of speech is not freedom to speak without opposition, and it is not freedom to speak without consequence. The right, which stands engraved in the laws of every sovereign Western state, protects one from physical harm or censorship for voicing an opinion – not from enduring the reactions that writing or speech provokes, in both personal and professional settings.

This definition, while seemingly tedious, is necessary to establish a common base of understanding of what it means to have the right to speak freely. A line exists in the traditional discussion between a legal right and a cultural norm, and it is in the realm of cultural norms that the debate emerges. The question is whether cultural censorship poses harm to reasoned civil discourse and debate. Giving a voice to the traditionally ignored has provided an exceptional benefit to society by adding voices to debate and creating a world that more equitably serves all who live in it. But a second perspective, often an especially loud one, bemoans a loss of the ability to speak with impunity on all subjects. This is often attributed to the creation of a new cultural norm of being overly politically correct.

To be politically correct means, in the literal sense, to use language or measures that avoid offense or disadvantage to members of particular groups in society. But the term has morphed into a rallying cry of the political right and a point of contention for the political left in the Western world. The divide between students and faculty on university campuses in the West is stark: students believe they are standing in the face of bigotry and hatred, refusing to indulge the musings of provocateurs and provide platforms to those who spread messages of division. Faculty and staff believe that this ideological isolationism is draining an entire generation of the ability to reasonably debate while creating an academic environment that does not encourage personal growth or evolution of thought.

One consistent theme woven throughout think pieces and news articles is a firm endorsement of an individual’s right to protest, another expression of freedom of speech that can be used to counter undesirable ideas. Non-violent protest has been and is used every day to amplify messages or spread an agenda, often in the face of authoritarian regimes, to demand freedom or equality. Effective protest is essential to a democratic institution (hence the freedom of assembly) – but a line exists: if the freedom of an individual to speak encroaches upon the freedom of another to do the same, it may itself amount to censorship. Additionally, breaking any law during a protest still amounts to an illegal activity. If protest threatens harm, destroys property or creates an unsafe environment, it moves beyond the realm of freedom of speech and must be treated accordingly.

Any group or institution seeking to repress the freedom to speak is an illiberal one. The right to speak freely is fundamental to the rule of law. An unwillingness to debate certain ideas has created a culture wherein these ideas are rejected, but not refuted. This lack of public debate around certain subjects creates an atmosphere of tension whenever they arise and fails to appropriately deconstruct them to the degree where society at large can take informed positions. If the only discussion heard surrounding a social or cultural issue stems from a single ideological view, it is impossible to truly claim to understand all sides of any issue.

A second, more subtle, trend has emerged: a desire to label ideas and the individuals or groups who espouse them. When an idea is deemed hateful (racist, sexist, misogynistic, homophobic, etc.), a tendency exists to label the individual voicing it as equally loathsome. This is dangerous. Conflating ideas with character is isolating for those accused and often ignores the reality that hatred stems from ignorance, which can be cured. If we as a society continue to treat the holding of certain ideas as grounds for rejecting a person, three things will occur: these ideas will never be said out loud; individuals will fear the label of hatefulness more than they fear holding biased views; and those who hold these views will feel rejected from society at large.

Creating environments that do not allow for open discussion has a sinister effect of discouraging people from asking hard questions for fear of being pushed away. Only when discussion can be open, only when ideas are debated in public can opinions be changed and society truly move forward. The alternative? We begin to associate more with ideologues who have a perceived freedom to speak openly. We conflate our desire to voice our views without fear of being rejected with having extreme views. And we fail to listen to those who we felt never listened to us.

“You are not entitled to your opinion. You are entitled to your informed opinion. No one is entitled to be ignorant.”

“I want you to be offended every single day. I want you to be deeply aggrieved and offended and upset, and then to learn to speak back. Because that is what we need from you.”


Greening the Grass

The basis of trade, an exchange of goods and services, has existed for millennia across almost all variants of modern humanity. Moving beyond a literal exchange of goods for food became necessary when economies grew in complexity and specialization to the degree where a medium of exchange with a commonly understood value was needed (likely around the same time someone took a job other than farming). Thus money was invented to fill an obvious need. As economies continued to grow in scope and money rose in popularity, the capacity to amass wealth and wield the power it provided saw the development of institutions and systems that would protect and manage the concept of value. Early banking systems focused on trading grains to merchants and farmers, a far cry from the current practice. Grain is still traded – but in futures contracts on commodities markets, where the price of goods is determined through a combination of automated trading and turbocharged bankers.

These improvements have led to greater efficiency and an almost universal improvement in quality of life across the planet. Economic and financial systems have arisen and connected across the globe, resulting in a world more interconnected than ever before. Closer union has proven a deterrent to war, an engine of opportunity and a driver of prosperity that has taken entire nations from the brink of collapse. Like any market, there are winners and losers – but few can contest that the world is a safer, more accessible place thanks to the creation and integration of the global economy.

But market losers have faces and families. Too easily dismissed, we forget that the men and women residing in the nations facing recession and collapse do not share the same views of the system. For them, it is one that crushes freedom by conflating wealth and power, resulting in a loss of political and economic power to those being oppressed. For them, it is a system ruled by multinational corporations who destroy entire regions to enrich faceless shareholders. For them, it is a system that fosters inequality, pushing those who are no longer deemed valuable into a state of perpetual instability, where they cannot find employment or take advantage of any opportunity that may exist. For them, this is a system that disadvantages and oppresses, where bureaucrats and elites profit from their misery and they are left to starve.

Capitalism has winners and losers. But thought has gone into developing reform strategies to make the game more equitable. The popularization of Inclusive Economic Growth over the years proves this – defined by the OECD as “economic growth that creates opportunity for all segments of the population and distributes the dividends of increased prosperity, both in monetary and non-monetary terms, fairly across society”, it is both quantifiable and readily measured. Through a series of key performance indicators (KPIs), economies can be benchmarked and compared.

Three key pillars – growth and development, inclusion, and intergenerational equity and sustainability – provide the framework for determining what it entails to be an inclusive economy. Through quantification and measurement of performance, the World Economic Forum has developed an annual Inclusive Development Index that captures data from 109 nations around the globe and compares them to averages of the last five years. Separated into developed and developing nations, the divide is distinct.
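Composite indices of this kind are typically built by normalizing each indicator so it is comparable across countries, then averaging within each pillar and across pillars. The sketch below is a generic illustration of that approach using min-max normalization and hypothetical data – it is not the WEF’s actual methodology or weighting:

```python
# Generic sketch of a pillar-based composite index: min-max
# normalize each indicator across countries, average indicators
# within each pillar, then average the pillar scores.
def minmax(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(pillars):
    """pillars: list of pillars; each pillar is a list of indicator
    columns; each column holds one value per country."""
    n_countries = len(pillars[0][0])
    scores = []
    for i in range(n_countries):
        pillar_scores = []
        for pillar in pillars:
            normed = [minmax(col)[i] for col in pillar]
            pillar_scores.append(sum(normed) / len(normed))
        scores.append(sum(pillar_scores) / len(pillar_scores))
    return scores

# Three hypothetical countries, two pillars of one indicator each;
# the second indicator runs in the opposite direction, so the
# composite scores come out identical.
print(composite_index([[[1.0, 2.0, 3.0]], [[30.0, 20.0, 10.0]]]))
# prints [0.5, 0.5, 0.5]
```

The toy output illustrates why pillar design matters: a country can lead on one dimension and trail on another yet land mid-pack overall.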

8 of the top 10 most inclusive nations in the world are European, with each Scandinavian state found in the top 6. Amongst developing nations, Lithuania was ranked highest, flanked by Azerbaijan and Hungary. Great Britain and the United States were ranked 21st and 24th respectively.

But does focusing on inclusive economic growth go beyond political speech and make a discernible impact on economic growth and poverty levels? Overall, GDP growth rates are above average amongst highly ranked nations, with a notable trend towards liberal economic policies showing more significant results. Some benefit strongly from wealth in key industries – oil and gas, banking, metal fabrication – whereas others benefit from government stability. A notable trend is that the highest-ranked nations are traditionally not viewed as active players in global affairs and have relatively mono-ethnic domestic populations. Whether these factors play a substantial role is indeterminate, but it is likely not a coincidence.

In the case of the United States, boosting the inclusivity of growth within the nation was one of the few issues both candidates agreed upon in the 2016 presidential election. The most commonly cited statistic outlining wealth distribution within the nation’s borders is the percentage of overall wealth going to the top 1% of earners, but it does not paint a complete enough picture. By the mid-1980s, income inequality saw a 5:1 ratio of real earnings between rich and poor households. For the following 15 years, the incomes of poor households increased 0.06% annually before falling again in the early 2000s. Throughout this time, rich incomes steadily increased by 0.83% annually. Technological automation and globalization likely played a role in these trends, but do not paint the entire picture – both factors would have had uniform cross-border impacts and fail to explain why inequality grew faster in the United States than in both Great Britain and France over the same period.
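Compounding those two growth rates shows how quickly even a sub-1% annual gap widens:

```python
# Compounding the article's figures: poor-household incomes growing
# 0.06% per year vs rich-household incomes growing 0.83% per year,
# over the 15-year stretch described.
def compound(rate, years):
    return (1 + rate) ** years

poor = compound(0.0006, 15)   # ~+0.9% cumulative
rich = compound(0.0083, 15)   # ~+13.2% cumulative
print(f"poor: +{poor - 1:.1%}, rich: +{rich - 1:.1%}")
```

Starting from the 5:1 ratio above, these rates alone would push the gap to roughly 5.6:1 over the period – before the early-2000s fall in poor-household incomes is even counted.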

If inclusive growth is measured by ensuring inclusion in the workforce, developing solutions built to advantage individuals long-term and creating an environment that prioritizes opportunity for people of all ages, policies must be designed with people in mind. Easing the regulatory burden is important – but regulations exist to protect those who cannot otherwise protect themselves, or to create rules around the use of common resources. And the ugly truth is that we may trade the grain today, but we are still responsible for ensuring that there is enough to go around tomorrow.


Mightier than the Sword

The modern blitzkrieg revolutionized warfare by creating a way to overwhelm enemy forces with speed and concentrated firepower, forcing them into submission in unprecedentedly short periods of time. The term has evolved over time to mean launching an extreme military campaign designed to create a short-term victory. As war evolved beyond soldiers and planes, so did tactics – but rudimentary strategies have remained. And the blitzkrieg has re-emerged in recent years in a form of military operations requiring dexterity not in the trigger finger, but in the typing of thumbs on a touchscreen keyboard.

Information warfare, enabled by advances in, and access to, technological infrastructure, has been defined as “conflict or struggle between two or more groups in an information environment”. This definition is enormously vague, and seemingly extends from direct hacking to celebrities passive aggressively subtweeting insults about their former partners to their new ones. In reality, the spectrum encapsulates all forms (though to qualify as warfare, the scale must be stretched from the actions of a Kardashian to those of a national government). Billions have been invested by governments across the globe to install precautionary measures to prevent direct cyber attacks, the likes of which become more common by the day. From China and Russia’s frequent attacks upon American intelligence infrastructure to the North Korean government attacking a movie studio and leaking thousands of email correspondences, the general public has become familiar with how this form of warfare looks.

But there is another type of information warfare that has been used, to mixed results, across the globe: propaganda and disinformation campaigns. Stretching through traditional and new-media channels, well-funded attackers have the capacity to plant ideas, sow dissent and generate alternate realities. A recent article from The Atlantic cited an example in which the Russian government renamed a large region of southern Ukraine ‘Novorossiya’, creating maps and flags, writing the name into history textbooks and creating dedicated news networks and Twitter feeds. While seemingly comical, this is rather sinister – Russia’s actions in Ukraine are creating an environment in which recently annexed regions are forcefully integrated, both physically and culturally, into Russian history. Within the next decade, the very real possibility exists that citizens born in Ukraine may be conditioned into believing they are in fact Russian, and that the Russian invasion and occupation of their homeland was not an illegitimate act.

This seems a ludicrous example – a government cannot simply rename countries and territories wherever it pleases. But it does serve to illustrate the power of disinformation campaigns. If an idea is introduced and seems valid, many will consider it. If opposing ideas are actively suppressed or delegitimized, individuals who hold certain beliefs will begin to assume their ideas are valid and widely held, and may develop a sense that a consensus exists when, in reality, there is none. Psychologists characterize this as the false-consensus effect, wherein one is led to believe that the opinion of their community is in line with the collective opinion of society at large. Common in the adoption of conspiracy theories, this effect has seen misinformation campaigns in both developed and developing nations launch and gain powerful figureheads as advocates, thereby bringing their views or theories firmly into the mainstream.

In the case of Russia’s invasion of Ukraine, state-funded media and propaganda arms painted protestors as neo-Nazis actively seeking to undermine the national interest. The first step of Russia’s claim to any territory is to broadcast its media programs to the Ukrainian public. Putin himself speaks of Novorossiya as if large regions of Ukraine were being rightfully repatriated with public support for Russian occupation – in reality, Russian forces hold only a small piece of land in the country and are decidedly unwelcome in the eyes of both the domestic and international communities.

Russia’s campaign extends beyond Ukraine – intelligence reports have shown Russian-funded media blitzes supporting anti-EU candidates across Europe, demonizing democratic protestors in Middle Eastern nations and interfering in foreign elections by launching disinformation campaigns and funding certain political parties while actively attacking others. Nor is Russia the only party known to engage in information warfare – China and North Korea have each been accused by multiple nations of interfering in elections through the suppression of information or the direct support of certain parties, thereby influencing the governments and futures of fellow sovereign states.

How can misinformation be countered? Going blow for blow has proven ineffective – but directing increased funding and support towards media institutions, whose responsibility is to report the truth, remains a viable alternative. The responsibility then falls on the media itself not simply to engage these ideas in discussion, but to actively fight for the truth. Both traditional and new media platforms have been careless in this, providing a platform for all to espouse their views equally without creating corresponding accountability for all parties. A set of common standards must exist and facts must be prioritized – lest we be overwhelmed with information to the point where we are unable to separate fact from falsehood.

Identifying misinformation when it is present remains the role of the media. Not all information is equal. Presenting the facts and allowing individuals to form informed opinions is necessary to the survival of liberal democracy. It’s time those who wield pens are held to the same account as those who brandish swords.


Labouring for Prosperity

The simplest measure of privilege is a quick count of the impediments or barriers one may encounter in accomplishing an objective. There are two core types of impediments: personal, which include location and physical ability, and systemic, which include access to educational opportunities and the existence of regulations that obstruct progress. A concrete example is labour mobility – the factors preventing one from being employed in a certain region or industry can be sorted into personal and systemic categories. The need for legal residence, an employer’s obligation to pay a minimum wage and discrimination based on age or race are all concrete examples of factors that can impede an individual’s ability to find employment.

Many of these factors – the right to unionise, the creation of a minimum wage, workplace safety standards – were developed to protect workers and their rights. As policies have been implemented and technology has advanced, markets have adjusted towards meeting these once-questioned standards. The prevailing theory surrounding globalization was that, over time, global markets would adjust to the migration of labour so that workers within the same industries across the globe would be paid the same wages. Countless examples across all sectors show that this has not proven to be the case.

But what is the impact of labour mobility on domestic economies? Within the EU, foreign workers (non-nationals) make up over 14% of the workforce – over 1.6M individuals – with an additional 1M EU28 citizens moving to another EU28 state in search of employment. Of these workers, 56% were younger than the average age of nationals in the nation they were moving to, and 44% possessed formal post-secondary education. In the United States, over 3% of individuals have relocated to another state for work (almost 10M people). If labour mobility serves as an indicator of the market’s ability to adjust structurally to meet demand, thereby raising overall productivity by better allocating resources, the two examples above stand as pillars. Or so goes the thought.

This is attributable to two key factors: the unification of occupational regulations and licensing, which lowers the barriers to mobility between nations or states, and an instilled cultural norm of economic migrancy. While reduced wage flexibility plays a role, so does age – 54% of young people who had previously moved away from home were likely to move again for employment within the next five years.

One factor looms overhead: the role of unskilled immigrants in shaping the labour market within a developed economy. Here, the EU and the United States can again serve as case studies: 13.3% of the population of the United States legally qualifies as “foreign-born”, and over 1.4M immigrants and asylum seekers enter the EU annually. Within these sub-populations, the majority have been found to be unskilled labourers, with the barriers to entry for licensing often proving insurmountable for impoverished newcomers.

An example: the United States faces a shortage of primary care physicians, with the problem expected to worsen as the population ages. Thousands of foreign-born doctors currently reside in the United States who do not practice medicine, having obtained their medical licenses and education abroad. To practice medicine with a license in the United States, a doctor must pass board exams and an English language test and complete a residency program. Here the systemic impediments emerge – among doctors who attended medical school in the United States, over 95% are accepted into residency programs at American hospitals. For doctors trained abroad, the acceptance rate drops below 40%.

Justifications abound for this case – medical schools are taxpayer-subsidized, international standards differ, familiarity with the American medical industry matters. But none of these factors changes the clear fact that there are too few primary care physicians in the United States to take care of the ageing population. Demand clearly outweighs supply – yet a supply exists in the form of trained doctors for whom the impediments to practicing medicine in the United States push over 60% into various states of underemployment.

Facilitating the mobility of labour and lowering the barriers to entry for skilled workers is a crucial policy area where reducing regulatory hurdles and tackling systemic bias go hand in hand; a perfect pairing for wherever you may fit on the political spectrum. Systems should be put in place to ensure that the growing skills gap is met in coming years – creating systems that better utilize skilled labour is both an easy and an impactful fix. Maybe then we can recognize that giving someone a good job is not really a privilege at all.

Tomorrow’s Jobs – Today

Much ado has been made about the loss of blue collar manufacturing jobs in Middle America, highlighting the supply and demand gap that exists in developed nations for modern blue collar jobs. Jobs are somehow disappearing from within the borders of one nation and being unceremoniously transplanted into another, with hungry corporations eager to take advantage of supply chains that utilize cheap labour and lax environmental regulations. This narrative presupposes that when the jobs leave, they do not return. Yet current population statistics show the total number of jobs in the United States has increased by over 10M from 2011 to today. So where are all these jobs no one seems to have?

Automation and technological advancement, not just within the manufacturing industry but across the economy as a whole, have created jobs using technologies that require greater levels of training and specialization. No longer is it enough for graphic designers to be able to draw – now they must be able to code in HTML5 and JavaScript. New positions constantly emerge as technologies evolve, with mobile applications and, soon, wearable technologies demanding entirely new modes of interfacing with customers. A graphic eye and artistic sense must now be paired with front-end web and app development skills to be in demand.

This is commonly known as “the skills gap”, wherein 39% of employers last year reported having difficulty filling positions due to a lack of available talent. This is not a post-2008 phenomenon – employers have long reported difficulties in finding qualified workers to fill positions.

Educational institutions have fared no better in training people to plug the gap; post-secondary institutions themselves have difficulty keeping up with technological advancement and employer demand. This lack of training for the real world is evidenced by the experience of the average university graduate – in the United States, 45% of all workers with Bachelor’s degrees are underemployed, meaning they do not work in fields that require the active use of their education or degree. Educational institutions often fail to impart the current technical skills needed to enter most industries, leaving graduates unprepared for the realities of the job market and facing ever-increasing employer demands simply to be considered. In the example of the graphic designer, art schools still teach much of their curriculums in a print-based format, with the primary technological training being website development – hardly appropriate preparation for the demands of the workforce.

Equally inappropriate is asking every unemployed factory worker to return to school and earn a four-year degree simply to get another job. Many possess neither the financial means, nor the desire, to return to academia. But a report released by the Harvard Business Review showed that wage growth has risen sharply in industries with the swiftest technological advancement (the average wage in health care is 49% higher than it was before the introduction of the workplace computer, compared with a measly 2% increase in manufacturing). So what is the solution to filling the existing skills gap with underemployed workers?

As knowledge is increasingly commoditized, a demand has emerged for workers to learn continuously throughout their careers – to become lifelong learners. The model of compressing education into the earlier, more formative years of an individual’s life and reaping the rewards later is not enough when the most sought-after skills on the market change every 5-10 years. Extra training is now understood by 54% of adults in the labour force as a necessity of working life. Newer forms of education, designed to be less time-intensive and more skills-based, have shown significant promise.

Eight-to-twelve-week “boot camp” courses offering to teach students the skills needed to code in multiple computer languages see trainees scowl at JavaScript over coffee-fuelled 11-hour days. These courses charge premium prices and boast average graduate employment rates of over 80%. The popularity of Massive Open Online Courses (MOOCs) has also skyrocketed, with web platforms promoting open-access, unlimited-participation education to the over 7 million individuals seeking to learn. Employers now routinely offer to subsidize further education, understanding that the value of human capital within the organization increases when investment is made in training and education.

But does all this translate to unskilled workers? 80% of MOOC users hold previous university degrees, and unskilled workers may simply lack the desire to stare at a screen and be lectured at. But opportunities exist to help those with the desire to learn but not the means – trade unions and governments can introduce funding programmes to provide the financial capacity to enter academia for those who find it valuable. For those who do not wish to change, the future seems less bright. A future of heightened inequality and division between sectors may emerge as knowledge is increasingly viewed as a tradable commodity in the domestic and international economy.

Here lies the responsibility of government. He who promises to forget no one must think not only of improving the immediacy of life, but of developing systems that sustain this promised success. Investing in education, in both its current and alternative forms, is a crucial step in creating norms that ensure the only thing lost in an increasingly automated economy is our fear of progress, not our jobs. And as for training for the jobs of tomorrow – what better time to start than today?


A Leading Question

When searching for the qualities that make any one individual a leader, avoiding any list or service found on the internet promising to teach “the secret all great leaders live by” will likely save you money and memory for more useful things. Often, these lists are extrapolated from the qualities shown by figures in business – Jack Welch, Steve Jobs and Bill Gates are models drawn upon so often their names have become synonymous with success in business ventures of any kind. The frequency with which their names are thrown about would suggest that determining what breeds success could be as simple as understanding what traits these people shared, then claiming these as the characteristics of effective leaders or managers.

But what actually makes a good leader? Lines are drawn between public and private sector leadership, effectiveness and efficiency, leadership versus management, and becoming a symbol of success versus simply succeeding. Examples of poor leadership or management abound in every business and industry across the globe – traits like an inability to cede control, poor listening skills and a simple failure to produce are understood as poor qualities by those being led by such managers. But beyond certain universal traits, objective evaluations of the quality of leadership tend to be overly analytical and theoretical while remaining subject to individual or organizational bias (thereby eliminating the value of objectivity entirely).

The first question: are different traits required to be an effective public or private sector leader? Of course – the differentiation between a mission of profit generation and one of serving the public interest itself requires a different approach to problem solving. Goal ambiguity, the degree of bureaucracy and consistency in strategic direction are all factors that influence management style. Additionally, the measures of success differ. Profitability, while occasionally overly efficiency-focused, at least outlines a tangible objective. Working on a shifting political mandate with a taxpayer budget makes the idea of a concrete long-term objective slightly more ambiguous. Understanding this, certain character traits are more visible in each sector: a need for achievement and affiliation is stronger in the private sector, with “a desire to be unique” commonly held amongst public sector managers and “a desire to have an impact” held more deeply in the private sector.

On leadership versus management, the idioms range: leaders innovate, managers administrate. Leaders develop, managers maintain. But being a manager does not preclude one from holding leadership traits – in fact, of the examples above, neither Steve Jobs nor Bill Gates was famed for interpersonal skills, a fundamental weapon in a manager’s arsenal. To lead is to direct an organization, but to manage is to encourage the people who actually work within the organization to achieve those lofty expectations. In this sense, a strong leader may be an ineffective manager if they lack listening skills and empathy for their fellow man.

As for becoming a symbol of success, this falls into the lap of the question: what is the difference between a great leader and an icon? The answer is undoubtedly circumstance. In both the public and private sectors, examples abound of individuals who saw opportunity and exploited it. A question exists as to whether the Churchills, the Dr. Kings and the Fords of the world would have been capable of rising to the same heights in a different era. The answer is both unknown and irrelevant – they are who they are because their skillsets were both effective and sought after at the time. And becoming iconic is not necessarily a reflection of effectiveness – Richard Branson and Donald Trump are neither the richest nor the most successful businessmen to ever live, but their names are iconic in ways that Amancio Ortega’s and Larry Ellison’s simply aren’t. Effective leadership cannot be attributed to successful branding; otherwise becoming a true icon would simply involve having an enormously engaged following on social media.

The definition of leadership has evolved over time to focus less upon motivation and more upon engagement. In other words, leaders have to learn to manage those they work for in a deeper fashion. In a world where empathy reigns supreme, Welch’s growth-focused style and Jobs’s meticulous detail-orientation might actually be viewed as detractions that could have dramatically impeded them had circumstance been different. But shush – no one tell that to the lists on the internet.

 
