Statistics are a lot of fun, and contentious too. Yet they permit us to remain fairly grounded. Let's start with the obvious. From self-driving vehicles and semi-autonomous robots to intelligent algorithms and predictive analytics, machines are increasingly capable of performing jobs that have long been human domains. A 2013 study by researchers at Oxford University posited that as many as 47% of all jobs in the United States are at risk of "computerisation". Many respondents in a recent Pew Research Center canvassing of technology experts predicted that advances in robotics and computing will result in a net displacement of jobs over the coming decades, with potentially profound implications for workers and society alike. Some estimates go further, expecting robots to take over more than half of the jobs humans currently perform.
At the moment there is palpable optimism that the jobs humans currently perform will not completely vanish before 2050. Optimism notwithstanding, the changes we are already seeing, particularly at the lower end of task complexity, suggest that fundamental shifts are in the offing. I am inclined to take a peek not just into the near-term future, where automation, algorithms and machine learning are viewed as emergent threats to traditional notions of jobs, but also into the longer-term future, where the very definition of a job may become obsolete.
First, let's address near-term trends by considering algorithms. An algorithm is, at its simplest, a set of instructions for solving a problem, much like a mathematical procedure. Computers and smartphones fundamentally run on algorithms, which now rule our daily lives: social media, mobile apps, GPS systems, trackers, wearables, financial transactions, travel et al. We cannot do without them today. What is most underrated is how much algorithms have changed: they were once built with functional specificity, executing narrowly defined tasks efficiently, whereas today many are in self-learning mode. This gives us a window into a more interesting future where machine-to-machine (M2M) interactions may soon supersede human-to-machine transfer of knowledge. One example is worth acknowledging here. In January 2017 it was widely reported that Google Translate had developed its own intermediate representation so as to enhance its ability to provide translation services. The most remarkable thing about this development is that the algorithms governing it not only learned from millions of users how to translate meanings, but amassed enough information and experience to build a "context-based translation language" that provides accurate contextual translation. The possibilities such self-learning tools offer are phenomenal, yet scary at times.
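The shift described above, from task-specific instructions to rules learned from data, can be sketched in a few lines of Python. This is an illustrative toy under my own assumptions (it is emphatically not how Google Translate works): a hand-written conversion rule next to a version that infers the same rule purely from example pairs via least squares.

```python
# A fixed, task-specific algorithm: the rule is written by a human.
def fixed_rule(celsius):
    """Convert Celsius to Fahrenheit with a hard-coded formula."""
    return celsius * 9 / 5 + 32

# A "self-learning" counterpart: the rule is inferred from examples.
def learn_rule(pairs):
    """Fit y = a*x + b to (x, y) example pairs by ordinary least squares."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return lambda x: a * x + b

# The learner is never told the formula, only observed examples.
examples = [(0, 32.0), (100, 212.0), (37, 98.6)]
learned = learn_rule(examples)
```

The hand-written version encodes the programmer's knowledge; the learned version recovers the same behaviour from data alone, which is the essential difference between the two eras of algorithms the paragraph above contrasts.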
I hear many arguments from leaders around the world that the recent trends in automation, robotics and attendant technologies are just that: technological interventions that will not fundamentally alter the needs or drivers of businesses, but remain subsidiary in the context of value delivery. Given the still-discrete nature of such interventions, both in their development and in their adoption by organisations, I am inclined to agree for now. However, a thorough consideration of the implications, and of the preparedness they will require going forward, necessitates rigorous discussion of their applicability and of our ability to deal with the shake-ups that will necessarily follow.
Service automation technologies (often loosely labelled RPA) have already begun to take over transactional jobs around the world. Jobs in office administration, construction and extraction, arts, design, entertainment, media, legal services, installation and maintenance et al are slated to vanish, while "cognitive" jobs in management, engineering, design, computing, architecture, sales, marketing and education are expected to grow. Taking the APAC region as an example, by 2020 an expected 86.5 million people will be fully unemployed, while approximately 893 million will be in "vulnerable employment", meaning jobs that are prime targets for replacement by a non-biological workforce. Given that median ages in the region skew young, with consistently positive birth rates, the scenario over the next two decades is quite scary. Will adoption of such technologies therefore slow down, or be rejected altogether? The economics of pushing back can be grim: exports would become low-value, imports expensive, lifestyles may stagnate, incomes would surely dwindle, purchasing power would be drastically reduced, and the consequent political fallout could be severe, if not disastrous. While I am not advocating aggressive adoption of such technologies just to be "in", a complete rejection has irreversible implications for growth and development. Policy-makers will have to remain cognisant of the complexities surrounding modernisation and job creation, which at the moment seem to be in opposition to each other.
The scenario within corporations is of course another interesting story. Freed of the need to think about socio-economics, their strategies are governed more by competitiveness, growth and shareholder value, and the adoption of transactional automation technologies is already seen as an unavoidable imperative. Most corporations have developed, or are rigorously developing, strategies under which, within the medium term, their workforce would comprise both biological and non-biological employees. This transition requires not just an innate understanding of where in the organisation such technologies can be effectively leveraged, but also a clear view of a future driven by goals other than technological modernity. Meanwhile an inevitable conflict is brewing, pitting organisations against governments, and I believe one cannot ignore this reality. A good friend recently wondered whether it is time for governments to start taxing the non-biological workforce as well. But that is a conversation for another day.
A Hyper-Intelligent Tomorrow
At the other end of the spectrum we are witnessing great strides in various fields, some exciting and some conflicting: gene editing to reduce the incidence of heart attacks, fish skin used to treat severe burn injuries, stem cell regeneration from many types of body cells, gene splicing to increase crop disease resistance and yields, GMO crops intended to eliminate hunger altogether, credit ratings based on needs rather than incomes, 3D and 4D printing, gesture-based computing, womb-based therapy, virtual health trackers and real-time treatments, water labelling of products, industrial repatriation and many more. Such technologies can be driven either by an inherently "visionary" need, e.g. moon mining or beaming internet from space, or by a vision of a larger future in which "homo sapiens superiority" spurs ever more ambitious technologies and applications.
A range of futurists are predicting the future of work, most of them focused on identifying "jobs that will be created" and comparing them to "jobs that will be lost". Such predictions can seem overwhelming for governments and organisations collectively scrambling to stay on top of these trends. It reminds me of the battles these very entities have fought over the past three decades, trying their best to identify near-term trends and overhauling their environments (however sporadically) to remain in control. From establishing communication networks to building ecosystems for mobility technologies and eCommerce, from cloud computing to big data and IoT, endeavours the world over have reflected a pursuit of technology with scant regard for its applicability and its impact on the socio-cultural constructs of their ecosystems. The resultant supply clutter and value erosion (or non-creation) is palpable yet difficult to get a handle on. Complexity continues to increase, while our endeavours fall short.
Meanwhile, a few individuals and entities have turned the question upside down. Instead of asking what jobs are needed in future, they ask: to what end do we ready ourselves? Two institutions have taken this discussion to a poignant level while refraining from offering solutions, and I personally agree with the approach of putting human context into all our endeavours and predicaments. The Institute for the Future (IFTF) has identified six drivers of change that will in turn influence all developments, and their adoption or rejection, going into the future. These drivers are human in nature and bear directly on our collective decisions. I am not describing them here as they are beyond the remit of this article.
I firmly believe it is crucial that policy-makers and industry, along with civil society, begin to engage with these inherently complex questions, understand their implications, and then determine steps that are complementary to their collective needs and to the opportunities that could consequently be created. In the interim, I believe that almost all our efforts at transforming our workforce and reinvigorating economic pursuits through new skill development initiatives will remain necessary but severely insufficient. Trends like localism, contextual deficit, urbanisation, increasing rage, increased longevity, cultural intimacy, digital narcissism and a growing sense of entitlement are critical to appreciate; merely admiring technologies and rushing to adopt them is not enough.
Do We Need Jobs?
Over the past two centuries humans have built a successful societal model resting on two broad entities: corporations and governments. The former created opportunities to leverage individual capabilities and compensated people in exchange for work; the latter created ecosystems for the former to operate within, funding itself through the taxes it levied on them. Over time taxation has morphed into a veritably complex set of initiatives the world over, and with globalisation, some rationalisation of rules permitted cross-border trade and exchange. Yet a significant proportion of the global population is becoming increasingly disconnected from this economic reality. Perhaps the time has come to ask some fundamental questions. I have always wondered: why do we need jobs? Can we do without them? Perhaps through this categorisation of people into jobs we have built an inequitable system that can never change, and hence all initiatives aimed at inclusion, equality et al will always remain pipe dreams. What if we were to upturn this basic premise?
Recently Switzerland, closely followed by Finland, asked whether its citizens would prefer a Universal Basic Income (UBI), one that would free people from the tyranny of empire (hierarchy, organisations, jobs, positions, structures and the like). I would have imagined that the very concept of UBI presents a unique opportunity for humanity to liberate itself from the clutches of its own pursuits. Yet the notion was rejected in both countries. Does this mean that man inherently believes his effort should be measured against another's, and rewarded commensurately? Or are there other reasons? Interestingly, the rejections turned more on questions of application and funding than on the manner in which UBI was defined. Of course, one cannot ignore comparisons between UBI and the existing social security coverage in many nations, and the attendant disappointment many have with such systems. Meanwhile, India is toying with the idea of establishing a UBI of its own.
I am inclined to think there is a significant opportunity to restructure and redirect existing subsidies into a UBI model, thereby freeing up resources to invest in growth and new opportunities. That is easier said than done. The discussions are complex, yet in the context of incomes and sustenance, UBI presents an opportunity to essentially eliminate the categorisation of individuals, through jobs, into various roles, and to transform society into a conglomerate in which each individual is producer, consumer, supplier, aggregator and enabler rolled into one.
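The redirection argument above is, at bottom, arithmetic: pool the subsidy spend, divide by the covered population. A toy calculation makes the mechanics concrete; every figure here is hypothetical, chosen only for illustration, and says nothing about any actual country's budget.

```python
# All numbers below are hypothetical, for illustration only.
subsidy_budget = 50e9   # annual subsidy spend redirected into UBI (currency units)
population = 100e6      # number of adults the scheme would cover

# Per-person monthly payment if the pool were divided equally.
monthly_ubi = subsidy_budget / population / 12

print(f"Hypothetical monthly UBI: {monthly_ubi:.2f}")  # prints 41.67
```

Even this toy version exposes why the debate centres on funding: the payout scales linearly with the pooled budget and inversely with the covered population, so a payment that is meaningful for sustenance demands either a very large pool or a narrow definition of coverage.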
Of course this presents another interesting question: if we eliminate jobs altogether and have machines do them for us, who then remains in control? We as a species lost the battle to control nuclear technologies in 1945. The lessons from that failure influenced a 1975 conference of scientists working on DNA, which resulted in an agreement to restrain recombinant DNA experiments that risked losing control. That agreement holds even today. In January this year a similar accord was reached by technologists and global leaders, establishing 23 principles for AI. Again, goals around non-circumvention, retention of human control and related safeguards govern this agreement. Would we as a species adhere to it as we did with DNA, in an age when animosity amongst and between nations is at its highest?
Nevertheless, what remains true is that machines have begun to learn from each other, slowly making humans redundant. Google Brain's recent experiments with neural networks are a scary example of what may lie in store for us. We may have to make some radical choices around policy and technology in the next decade.
Discussions about conversational AI are ubiquitous these days, and virtual or cognitive agents, such as chatbots and the like, are at the forefront. With the mission to understand how these technologies impact services, what they...