Over the past few years, artificial intelligence (AI) has come to play a major role in many areas of scientific research. This trend has reached our daily lives through common devices such as smartphones with built-in assistants like Siri, Google Assistant, or Alexa that can understand and carry out our requests. Smart homes also continue to grow more sophisticated by learning from our behaviors. Meanwhile, AI is bringing autonomous vehicles and drone delivery services into reality, leading us toward ways of life we had never imagined before.
However, AI is not exactly a new technology. The term was coined decades ago, and successive advances in machine learning, including data analysis, modeling, and backpropagation, enabled AI to spread and be widely used in software and applications.
Dr. Huei Peng presented his view on the use of physics for AI applications in an invited talk at ITRI.
“AI comes in cycles,” noted Dr. Huei Peng, Professor of Mechanical Engineering at the University of Michigan and Director of Mcity, during his speech at ITRI. The booms and busts of AI have come in several cycles, he said: Boom 1 (known as good old-fashioned AI, or GOFAI), Boom 2 (the era of so-called “expert systems”), and Boom 3, where we are right now. Dr. Peng stressed the need to recognize how the current boom will grow and what its outcome will be, because this understanding guides us in identifying what is genuinely new in AI.
“There is the need to leverage our traditional knowledge completely,” added Dr. Peng. The “traditional knowledge” mentioned here refers largely to methods from physics. Trained as a mechanical engineer, Dr. Peng believes AI can be enhanced either through conventional sensing or through a control-theoretic approach built on model analysis and the mathematics expressed in block diagrams. Meanwhile, the emerging data-driven approach, commonly implemented in languages such as Python, is widely used across all kinds of AI applications.
As AI methods such as fast computation, big data, backpropagation, and deep learning gain popularity, AI is undoubtedly on an upswing. One of its recent triumphs was the AlphaGo match, in which an AI scored a decisive win against the 18-time world Go champion Lee Sedol. Nearly two decades earlier, IBM’s Deep Blue had beaten the world chess champion, Garry Kasparov. Indeed, AI’s achievements are impressive enough that even Dr. Peng acknowledges them: “AI does do something that our traditional way doesn’t know how to solve.”
However, Dr. Peng stressed that AI still faces many challenges, notably fragility and slow or erratic convergence. As an example, he pointed to end-to-end learning results for autonomous vehicles. These tests involve recording 20 to 100 hours of actual driving in places like California. The full scene layout is then provided for neural networks to learn from street-view datasets, yet almost all of these reinforcement learning results show erratic convergence.
To address this, Dr. Peng argued for incorporating physics into the training of deep neural networks (DNNs). Autonomous driving results can be improved by adding approaches such as a model-enhanced cost function, along with parallel and polynomial methods for calculating lane curve shapes or the number of lane lines. Dr. Peng indicated that the end results appear to be a “few percent better” than other benchmarks. This also implies that training DNNs doesn’t always guarantee better results, or as Dr. Peng commented, “Sometimes they can go horribly wrong or become worse.”
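Dr. Peng did not present code, but the general idea of a model-enhanced cost function can be sketched as follows. In this illustrative Python example (all function names, weights, and data are hypothetical, not Dr. Peng's implementation), a standard data-fitting loss is augmented with a penalty for predictions that stray from a low-order polynomial lane model, encoding the physical prior that lane boundaries are smooth curves:

```python
import numpy as np

def physics_informed_lane_loss(pred_points, target_points,
                               degree=2, model_weight=0.1):
    """Illustrative model-enhanced loss (hypothetical sketch).

    Combines an ordinary data term with a physics-based penalty:
    lane boundaries are smooth, so predicted lane points should lie
    close to some low-order polynomial curve.
    """
    # Data term: mean squared error against the labeled lane points.
    data_loss = np.mean((pred_points - target_points) ** 2)

    # Model term: fit a low-order polynomial to the predicted points,
    # then penalize the residual. Predictions far from any smooth
    # polynomial are physically implausible lane shapes.
    x = np.arange(len(pred_points), dtype=float)
    coeffs = np.polyfit(x, pred_points, degree)
    smooth = np.polyval(coeffs, x)
    model_loss = np.mean((pred_points - smooth) ** 2)

    return data_loss + model_weight * model_loss
```

A network trained against such a combined loss is steered toward physically plausible outputs even where the raw data term alone would tolerate jagged, erratic predictions; the weighting between the two terms is a design choice.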
As AI continues to advance, Dr. Peng concluded that, from a mechanical engineer's point of view, one should never blindly collect big data and claim victory. Rather, he said, “We should always remember the value of the old knowledge and never stop learning new things!”