Last time, I posed the following questions:
1) What are the hidden black swans associated with relegating all food production to robotics?
2) Is food production important enough that we should subsidize a subset of farmers to continue to farm so that we maintain the knowledge base?
3) Is the constant drive toward the lowest-cost option always in the best interest of society at large?
4) Should we take a deep dive into the past, uncover the progress traps we fell into, and determine whether we are making similar one-way decisions in almost every human domain in this era of disruption?
Again, I turn to the field of risk analysis for answers. I know, I know – most people’s eyes glaze over at the mention of risk analysis, but I feel it’s an important area of study because it ensures that we are thinking about the future in the right way. The future is not set in stone; it can veer off down many different paths, and risk analysis can act as one of the rudders that guides the collective ship we’re all on.
The black swan concept is one of the most effective ways to look at future risk, so let’s use it here. I think that mass robotization is inevitable, so it’s not something we can really stop. I also think there are certain benefits if we can keep it under control. (By the way, if you’re new to robotization, I recommend you read this blog on AI). That being said, I feel that relegating all human activities (labour, trades, professions, and basic services) to mass-produced robots developed by a few corporations can potentially “fragilize” future generations, and whenever a system is fragile it is susceptible to damage or collapse when conditions become volatile. Moving to an AI future will be the latest iteration of the division of labour, increasing specialization in some fields and eliminating others entirely. Here are some potential risks I can see as a result of this radical shift in society:
1) Humans will forget how to grow food, raise livestock, and tend the land.
2) A handful of corporations will own all crucial genetic and machine technologies, driving the cost of food production so low that new entrants are discouraged or locked out entirely.
3) Robotically grown food will require functional electronics and will open the food system up to vulnerabilities like EMP attacks from foreign governments and solar flare events.
4) The lack of humans living in rural areas due to a robotic workforce will eliminate feedback loops, leading to the relaxation or even elimination of limits on pesticide, fertilizer, and herbicide application.
Now, this shortlist is just off the top of my head; there are no guarantees that any of these things will happen. Risk analysis is not necessarily about predicting the future. It is a premeditated examination of possible outcomes to facilitate decision making. If we don’t engage in this type of analysis, we may end up veering towards a future that we do not want.
We Need to Talk About The Future
So here’s what I’m arguing for.
I want a conversation about what we DO want the future to be. What human jobs, tasks, professions, and trades we should strive to preserve. How AI, robotics, demonetization and “democratization” can and will affect us and future generations.
I want to have these conversations in society at large. And within our communities. And with our families. As my children grow, I know that having contingencies for an automated future will be important in their lives and the lives of their children. A healthy dose of critical thinking and skepticism will be essential going forward, especially when human life as we know it may become unrecognizable within a few decades.
I know that I’m not alone in this line of thought. Through our work at Adaptive Habitat, many of our clients are explicitly asking us to think long and hard about these problems. This post, ultimately, is an invitation for all of you to engage in the conversation I’m already having in private. Judging from the many thoughtful comments on the previous post, this is a topic ripe for dialogue.
So what do you think?