Four years ago, Elon Musk famously predicted that artificial intelligence would overtake human intelligence by the year 2025.
“We’re headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now,” he told the New York Times.
Musk has also repeatedly warned of the potential dangers of AI, even invoking the “Terminator” movie franchise by way of illustration.
And yet, the very same Elon Musk recently unveiled the prototype for a distinctly humanoid Tesla Robot, which he hopes will be ready in 2022. Speaking to an audience at Tesla’s AI Day in August, Musk quipped that the robot is “intended to be friendly,” and added that it will be designed to “navigate through a world built for humans” – alluding to his previous, apparently still-extant concerns.
Of course, Musk’s fears about AI aren’t shared by everyone. Fellow tech entrepreneur Mark Zuckerberg, for one, has publicly dismissed such doomsday warnings as overblown. Then again, Musk isn’t alone, either: Stephen Hawking once famously warned that AI could ultimately “spell the end of the human race.”
So what can we take away from this confusing discourse about AI? Is artificial intelligence the savior of humanity? Or are we about to get conquered by an army of drones?
The truth is (probably) a lot less theatrical – but arguably no less dramatic.
Let’s face it: AI can do things humans can’t – especially when it comes to data
The misleading thing about these types of high-profile, philosophical debates about AI is that we actually have a long way to go before what Hawking referred to as “full artificial intelligence” is even developed – let alone mass-introduced into the marketplace.
Undeniably, however, the vast potential of AI is as much recognized by experts as it is taken for granted by the general public. Machine learning and other forms of AI are already defining many aspects of our daily lives, from the way we communicate with others to our ability to get to work on time, to how we shop, work, and even acquire knowledge.
In unveiling his Tesla robot, Musk offered a pretty succinct summary of the core benefits of AI in general, asserting that the robot’s purpose will be to take over “unsafe, repetitive, or boring” tasks that humans would rather not do.
That summary is applicable to almost any AI application you can think of: taking over tasks that humans either never really enjoyed doing, or weren’t ever that great at in the first place. A classic example is food assembly lines: humans get tired, bored, make mistakes, and have potentially dangerous accidents – all things that robots either don’t experience at all, or (in the case of accidents) experience less often, with costs measured in terms of financial losses rather than human lives.
But a far better illustration of this reality is in the world of data. In the days before “big data” became a buzzword, there was hope that the explosion of information would immediately usher in an era of true enlightenment. Finally, human beings could have all the data they needed at their fingertips to make the optimal decisions every time.
Of course, that’s not what happened. Instead of being liberated by “big data,” we became hostages to it – from the spam clogging our email inboxes to the blur of graphs, charts, and tables that, to this day, forms a core challenge for almost every business.
Then came artificial intelligence, and with it, the key to unlocking the potential of that ocean of data. And herein lies both the immense promise of AI, as well as the fear of “Terminators” and robot-driven unemployment: AI, particularly in the form of machine learning algorithms, is vastly better at analyzing data than human beings are.
The bottom line: Artificial intelligence helps humans make better decisions
While philosophical debates between tech heavyweights naturally make the headlines, the current daily reality is far more benign. In practice, AI is mostly being used to empower humans, not sideline them.
Take the food manufacturing example above. Yes, it’s true that many food assembly lines are now dominated by machines rather than people, much in the way the Industrial Revolution did away with other menial jobs. But just as the Industrial Revolution paved the way for a more prosperous future, rather than one of mass unemployment (as many feared at that time as well), the Industrial Artificial Intelligence Revolution is enhancing and improving the lives of food manufacturing teams, rather than rendering them redundant.
Using AI, food manufacturing teams are better able to excel at their jobs – which of course benefits them, their employers, and ultimately the consumers, who enjoy a greater quantity and better quality of product.
I’ve seen this firsthand. My company, Seebo, is part of this “Fourth Industrial Revolution.” Our proprietary Process-Based Artificial Intelligence™ is enabling global leaders in the food industry to reduce production losses – waste, yield losses, and quality deviations – saving them millions each year. At the same time, they’re using our technology to become more sustainable: cutting emissions, lowering overall energy consumption, and significantly reducing food waste.
And as with many other applications of machine learning AI, it’s all about the data. In the case of food manufacturers, it means using Seebo’s AI to reveal the hidden causes of these food production losses, high emissions, and so on – insights that were previously unavailable due to the complex nature of food manufacturing data. Armed with those insights, process experts and production teams are able to make the right decisions in real time: to know when to adjust the process or maintain certain set points that they may otherwise have neglected or overlooked.
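To make the idea concrete, here is a minimal sketch of how machine learning can surface hidden drivers of a production loss from process data. Everything in it – the sensor names, the simulated relationship between temperature drift and waste, and the correlation-ranking approach – is an illustrative assumption for this toy example, not Seebo’s actual method or data.

```python
# Toy root-cause analysis: rank process sensors by how strongly they
# relate to a production loss (waste). All values are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 500
temp = rng.normal(180.0, 5.0, n)      # oven temperature (°C), hypothetical sensor
speed = rng.normal(1.0, 0.1, n)       # line speed (m/s), hypothetical sensor
humidity = rng.normal(40.0, 8.0, n)   # ambient humidity (%), hypothetical sensor

# In this toy world, waste is driven mainly by temperature drift
# away from a 180 °C set point, plus a little humidity and noise.
waste = 0.8 * (temp - 180.0) + 0.05 * humidity + rng.normal(0.0, 1.0, n)

# Rank each sensor by the absolute correlation of its readings with waste.
sensors = {"temp": temp, "speed": speed, "humidity": humidity}
ranking = {
    name: abs(np.corrcoef(values, waste)[0, 1])
    for name, values in sensors.items()
}
for name, score in sorted(ranking.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

In this simulation, temperature rises to the top of the ranking – the kind of “hidden cause” a process expert could then act on in real time. Real manufacturing data is far messier and nonlinear, which is why production-grade systems go well beyond simple correlations.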
AI: Empowering us to do better
Of course, as the saying goes, “with great power comes great responsibility.”
From the wheel to the printing press to nuclear power, technological advancements have always carried the potential for good or bad. In that sense, AI is no different; where it differs is that its full potential is largely unknown. We have yet to tap into everything this technology can do, so it often feels like a sort of black magic.
I do believe that the current trajectory is very much for the good. But more to the point, we don’t have a choice.
Humanity today faces two simultaneous global challenges. First, a population crisis: the global population is set to swell 25% by the year 2050, even as many countries (most notably China) face rapidly aging populations. And second, a rising climate crisis, as countries and industries struggle to cut carbon emissions while maintaining the productivity necessary to sustain those growing and aging populations.
In this struggle, artificial intelligence is perhaps our greatest ally. I’ve seen up close its potential to empower better decisions, bridging the gap between seemingly opposing goals – like reducing emissions while producing more, not less.
Far from conquering us, AI is humanity’s best chance of overcoming some of our greatest food manufacturing challenges today.
by Lior Akavia