The Hidden Risks of Autonomous Farming, Part 2


Last time, I posed the following questions:

1) What are the hidden black swans associated with relegating all food production to robotics?

2) Is food production important enough that we should subsidize a subset of farmers to continue to farm so that we maintain the knowledge base?

3) Is the constant drive toward the lowest possible cost always in the best interest of society at large?

4) Should we take a deep dive into the past, uncover the progress traps we fell into, and determine whether we are in the midst of making similar one-way decisions in almost every human domain in this era of disruption?

Again, I turn to the field of risk analysis for answers. I know, I know – most people’s eyes glaze over at the mention of risk analysis, but I feel it’s an important area of study because it ensures that we are thinking about the future in the right way. The future is not set in stone; it can veer down many different paths, and risk analysis can act as one of the rudders that guides the collective ship we’re all on.

The black swan concept is one of the most effective ways to look at future risk, so let’s use it here. I think that mass robotization is inevitable, so it’s not something that we can really stop, and I also think there are certain benefits if we can keep it under control. (By the way, if you’re new to robotization, I recommend you read this blog on AI.) That being said, I feel that relegating all human activities (labour, trades, professions, and basic services) to mass-produced robots developed by a few corporations can potentially “fragilize” future generations, and whenever a system is fragile it is susceptible to damage or collapse when conditions become volatile. Moving to an AI future will be the latest iteration of the division of labour, increasing specialization in some fields and eliminating others entirely. Here are some potential risks I can see resulting from this radical shift in society:

1) Humans will forget how to grow food, raise livestock, and tend the land.

2) A handful of corporations will own all crucial genetic and machine technologies, making food production so cheap that new entrants are dissuaded or shut out entirely.

3) Robotically grown food will depend on functional electronics, opening the food system to vulnerabilities such as EMP attacks by foreign governments and solar flare events.

4) With a robotic workforce and few humans left living in rural areas, feedback loops will disappear, leading to the easing or even elimination of limits on pesticide, fertilizer, and herbicide application.


Now, this short list is just off the top of my head; there are no guarantees that any of these things will happen. Risk analysis is not necessarily about predicting the future. It is a premeditated examination of possible outcomes to facilitate decision making. If we don’t engage in this type of analysis, we may end up veering toward a future that we do not want.

We Need to Talk About The Future

So here’s what I’m arguing for.

I want a conversation about what we DO want the future to be. What human jobs, tasks, professions, and trades we should strive to preserve. How AI, robotics, demonetization and “democratization” can and will affect us and future generations.

I want to have these conversations with society at large. And within our communities. And with our families. As my children grow, I know that having contingencies for an automated future will be important in their lives and the lives of their children. A healthy dose of critical thinking and skepticism will be essential going forward, especially when human life as we know it may become unrecognizable within a few decades.

I know that I’m not alone in this line of thought. Through our work at Adaptive Habitat, many of our clients are explicitly asking us to think long and hard about these problems. This post, ultimately, is an invitation for you all to engage in the conversation I’m already having in private. Judging from the many thoughtful comments on the previous post, this is a topic ripe for dialogue.

So what do you think?

Comments
  • jjhopper

    Scary stuff, hey? I find your analysis of potential risks compelling and feel that the trajectory you described is probable. I checked out the AI article – it is really interesting. I tend to approach risk analysis from an emotional, spiritual realm, as those tend to be my areas of strength for relating to the world around me. One quote that jumped out at me was near the beginning of the article, where the author was discussing the law of accelerating returns. He noted that “someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species.” I am pulling this quote out of context because it contains a concept that I have been pondering. We humans are indeed an animal species. In your outsourcing article, I was inspired to write about the fact that through the process of outsourcing we have distracted ourselves and outsourced ourselves to the point of forgetting that we are part of the animal kingdom, if you will. We are very unique animals, in the same way that every animal species is unique.

    For me, the solution that I am attempting to bring about in my life is to honor this truth and see myself as a part of the whole. This is a huge topic for me and I could go on and on and on about it. The complexities of the human experience are so fascinating. There is certainly no rule book for how to attain that connection, and there are various paths we can choose to walk that way in the world. Permaculture is one powerful example of humans digging at that connection. And so many more…

    thanks for this Rob!

    • interpretercam

      Have you read any Charles Eisenstein? He explores a lot of what you are talking about. He calls our biased human exceptionalism “the story of separation” and argues that we as a society and planet are ready for a new story.

  • Jackie

    Thanks for getting this conversation going, Rob – you bring up some great points. What I struggle with is this concept that AI taking over most aspects of our lives is inevitable. How can something that we can choose to pursue or not pursue be inevitable? Who is orchestrating this march toward increased mechanization (it’s not me!)? I think an economic model that incentivizes efficiency and low cost above all is at least partly responsible for this trajectory we’re on. That model seems ripe for a shake-up as well. You’re right – we need to be talking about what future we do want. Alternate visions are shared by many!



