Jun 11 / Benjamin Schumann

AI, Silicon Valley and the future of simulation

I should grow a beard, I know. Why? Well, it is a bit of a cliché that top German CEOs travel to Silicon Valley to get inspired by the buzz.
And the most substantial thing they bring back is a thick, long beard to show that they became “agile”… While I am no CEO, I did have the privilege of making a quick visit to my employer’s San Francisco office recently to attend our first “AI day”. And although I am not tempted to grow a beard in the slightest, I found it very inspiring indeed. Not least because simulation was an important part of the conversations, which eased my imposter syndrome. Here are some nuggets worth sharing, as well as my perspective on how things could go in the coming years for simulation experts.

The AI day was essentially several panel discussions, with legends of AI and Silicon Valley sharing their views on the current state of affairs, the challenges, and an outlook on where things are moving. Each of the invited guests had very interesting things to share, so let’s take them in order.

MARISSA MAYER

Marissa Mayer and Hans Peter Brøndmo
Even I didn’t need an introduction to Marissa, as she was a leader at Google for many years before taking on the role of CEO at Yahoo during challenging times. However, I didn’t know she would almost have taken up an offer with McKinsey, had Google’s offer not been even more interesting.

Moreover, she studied AI “before it was cool”, so her insights into the current hype are credible. When challenged on the ethical questions around AI (“Should a robot be allowed to kill?”), she mirrored my thoughts on the issue: in short, the benefits outweigh the dangers, but bad things will happen and are already happening. She likened the development of AI to humans wielding fire: it brought huge advantages, new jobs, essentially the modern world we live in today. But sometimes people get burned, and we do everything we can to avoid that.

HANS PETER BRØNDMO

Hans Peter currently works for Google X. He has held many positions at large companies such as Nokia and Apple, but has also founded various startups, the first as early as 1993.

He currently explores a “10x” idea around robotics and machine learning and shared a video of robots learning how to pick up various objects. Interestingly, they use reinforcement learning: the robots start out moving randomly until, at some point, they manage to pick up an object and receive a virtual reward. This leads them to memorize that movement and re-apply it next time. I felt very smug because the poster I presented showed an AnyLogic-based reinforcement-learning simulation model. I was in the right place after all.
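
Just for intuition, here is a minimal sketch of that trial-and-error loop in plain Python (tabular Q-learning). This is not Google X’s actual system: the states, actions and the simulate_pick() environment are made-up stand-ins for a gripper that only receives a reward when a pick succeeds.

```python
import random

# Illustrative reinforcement-learning loop (tabular Q-learning).
# States, actions and simulate_pick() are hypothetical toy stand-ins.
STATES = ["object_far", "object_near", "object_grasped"]
ACTIONS = ["move_left", "move_right", "close_gripper"]

# Q[state][action] = learned estimate of how good an action is in a state
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def simulate_pick(state, action):
    """Made-up simulated environment: returns (next_state, reward)."""
    if state == "object_far" and action == "move_right":
        return "object_near", 0.0
    if state == "object_near" and action == "close_gripper":
        return "object_grasped", 1.0      # virtual reward for a successful pick
    return "object_far", 0.0              # anything else resets the attempt

for episode in range(1000):
    state = "object_far"
    while state != "object_grasped":
        # Move randomly at first, prefer learned movements more and more
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[state], key=Q[state].get)
        next_state, reward = simulate_pick(state, action)
        # Q-learning update: "memorize" what led towards the reward
        best_next = max(Q[next_state].values())
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = next_state

print(Q)  # the movements that led to the pick end up with the highest values
```

After enough episodes, the actions that led to the reward carry the highest values, which is exactly the “memorize and re-apply” behaviour described above.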

Anyway, because learning to pick up an object is very slow in reality, they use simulations to let the algorithms learn. The robots even learn that sometimes they have to move another object out of the way first. This simulation-based learning (he called it “overnight dreaming of the robots”) is then transferred to the actual robots, and he hinted that this is still not straightforward.

Maybe we simulation experts could help here? I imagine the simulations in this case are highly realistic physics models. But machine learning is inherently bad at picking up causality, which is the root of all physical processes. So maybe we have to teach the physical robots not only to apply the reinforcement-learning outcome from the simulation but also to understand what drives reality. What is causing one thing to block another?

JEREMY HOWARD

Jeremy Howard (left)
Jeremy is another legend: he led Kaggle for several years and founded the very successful Fast.ai platform. His mission is simple: make AI uncool again. No typo here! He wants to make it accessible to us normal people (cool implies exclusive). No need for big data. No need for advanced algorithms. Everybody who can use basic Excel should be able to apply AI.

With this great vision in mind, he started to smile when I asked, “How do you see the role of simulation in the world of AI?”. His candid answer, literally, was “I love simulation”. And he explained how he used System Dynamics 25 years ago at McKinsey. In his view, simulation is a crucial part of AI development, not only as a training environment for algorithms but also as a testbed (what happens if we have cars talking to each other?). However, in the spirit of his mission, he cautioned that simulations are still too hard for normal people to use.

While I agree that too few people include simulation in their tool portfolio, I do not agree that the tools are too hard. I suppose Jeremy hasn’t seen recent tools such as Flexsim, Simio or AnyLogic, all of which can produce amazing results from pure drag-and-drop of objects.

I think the reason simulation is still not widespread has to do with education. But that will be a future blog post.

Some reflections

Through the event, however, I realized that we, as a community, have a huge opportunity with AI. Currently, AI practitioners see simulation as a means to an end: they employ it to teach algorithms faster. But we can show them that simulation should be used to shape AI. We can use simulations to predict what happens when AI-driven applications are deployed. What if we have autonomous cars on the road? How will a factory work when every IoT screw can talk to any other? What unexpected consequences will we see from chatbots in call centers?

The AI community doesn’t think about its impact on society in a systematic way. While I agree that the benefits will far outweigh the dangers, implementing AI into everyday life is an experiment within the complex system that is reality. And there is only one tool that can help explore complex systems: simulation.