In May 2024, the Saint Pierre International Security Center (SPCIS) launched an interview series titled “Global Tech Policy at the Forefront: Conversations with Leading Experts.” The initiative aims to build a deep understanding of how emerging technologies such as artificial intelligence (AI), blockchain, biometrics, and robotics are shaping global governance and transnational public policy. The series features leading experts from academia and industry, and the interviews will be published on SPCIS’s social media platforms, including its WeChat public account, website, and LinkedIn.
On July 15th, Dr. Naikang Feng, currently leading the Global Technology Policy Study at the SPCIS, conducted an in-depth interview with Dr. Ville Satopää, Associate Professor of Technology and Operations Management at INSEAD. They engaged in a comprehensive conversation about the challenges and opportunities presented by AI in the realms of forecasting, decision-making, business transformation, and ethical considerations.
Professor Satopää is an Associate Professor of Technology and Operations Management at INSEAD. Before joining INSEAD, he received his MA and Ph.D. degrees in Statistics from the Wharton School of the University of Pennsylvania. He also holds a BA in Mathematics and Computer Science from Williams College. For many years, Professor Satopää has been researching predictive analytics and probabilistic reasoning – the kind of components that are at the core of modern artificial intelligence. Recently, he has explored the potential of large language models in forecasting future geopolitical events and investigated how AI predictions of clinical trial outcomes can benefit from further human input. He has also been teaching extensively in Executive Education Programmes since joining INSEAD. Specifically, he teaches and directs a program called Artificial Intelligence in Business – one of the world’s first AI courses targeted at executives. Additionally, he has co-developed an online course called Transforming Business with Artificial Intelligence.
This article summarizes the key messages from the interview, and the content has been reviewed and authorized for publication by Professor Satopää.
Q: Thank you, Professor Satopää, for accepting our interview with the Saint Pierre International Security Center. As an expert in forecasting, what are the fundamental challenges facing traditional judgmental and statistical forecasting in your view? In what ways can the incorporation of AI help overcome these challenges or potentially exacerbate them?
Forecasting is ultimately about collecting as much information as possible and then representing that information accurately as a forecast, which can take different forms such as a single number or a full probability distribution. Information, however, presents itself in many different ways, and this creates fundamental challenges for traditional judgmental and statistical forecasting.
Consider, for instance, demand planning in a company. Traditional statistical forecasting tools such as exponential smoothing, ARIMA, or other time-series techniques can be very powerful at capturing trends and seasonality in the past demand for a product. However, we often have relevant information beyond the demand history: pricing, promotions, unstructured customer feedback, economic conditions, weather information, and so on. All of this can be hugely relevant to forecasting. Whereas it is not straightforward to incorporate such information into traditional statistical tools, AI models can make use of all such data. The downside is that an AI model often requires much more data than traditional statistical methods. Whereas the traditional techniques can operate on the demand history of a single product, AI often relies on so-called “global” models that take the demand histories of many products as input at the same time.
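To make this contrast concrete, here is a minimal, purely illustrative sketch in Python, assuming synthetic monthly demand data and the statsmodels and scikit-learn libraries; it is not the specific setup Professor Satopää describes. A univariate exponential smoothing model uses only the demand history, while a learned regressor can also exploit price and promotion features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)

# Hypothetical monthly demand for one product: trend, seasonality, and price/promotion effects.
t = np.arange(60)
price = 10 + rng.normal(0, 0.5, 60)
promo = rng.integers(0, 2, 60)
demand = (100 + 0.8 * t                        # trend
          + 15 * np.sin(2 * np.pi * t / 12)    # yearly seasonality
          - 4 * price + 20 * promo             # drivers beyond the demand history
          + rng.normal(0, 5, 60))              # noise

train, test = slice(0, 48), slice(48, 60)

# (1) Traditional univariate model: learns trend and seasonality from demand history alone.
es = ExponentialSmoothing(demand[train], trend="add",
                          seasonal="add", seasonal_periods=12).fit()
es_forecast = es.forecast(12)

# (2) Learned model in the spirit of an "AI" approach: can also use price and promotions.
X = np.column_stack([t, t % 12, price, promo])
gbm = GradientBoostingRegressor(random_state=0).fit(X[train], demand[train])
gbm_forecast = gbm.predict(X[test])

for name, forecast in [("Exponential smoothing", es_forecast),
                       ("Gradient boosting", gbm_forecast)]:
    print(f"{name:22s} MAE: {np.mean(np.abs(forecast - demand[test])):.1f}")
```

In this toy setting the learned model benefits from the extra features, but with only one short demand series the traditional method can easily win; that data-hunger trade-off is exactly the point made above.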
Of course, both traditional statistical and AI tools can only use information that can be represented as numbers or data. This can be a missed opportunity! Consider, for example, store managers’ years of experience interacting with clients. They may have a strong sense of what clients are looking for even though none of it has been recorded in any database. And even though humans cannot process large amounts of data the way machines can, they can incorporate intangible or qualitative information that cannot be expressed as numbers or data. The problem with humans is that the conversion from whatever information they have into a numerical prediction may not be very precise and often suffers from various cognitive biases.
All this suggests that, together, machines and humans can incorporate almost any information from the environment. But how do we make the best use of these two worlds? That is, how can we let machines and humans interact in ways that capture all the information that is available? This is an open question. Some suggest that we should minimize mutual influence and aggregate machine and human predictions that have been made independently. Others suggest that we should first produce a machine prediction and then allow a human to adjust it. What works, and when, is not clear!
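The two combination strategies mentioned here can be sketched in a few lines of Python; all numbers below are hypothetical and only illustrate the mechanics, not a recommended weighting.

```python
import numpy as np

# Hypothetical probability forecasts for the same event (values are purely illustrative).
machine_forecast = 0.62   # e.g., output of a statistical or AI model
human_forecast = 0.45     # an expert's independent judgment

# Strategy 1: keep the judgments independent and aggregate them afterwards.
simple_average = 0.5 * machine_forecast + 0.5 * human_forecast

# Strategy 2: anchor on the machine prediction and let the human apply an adjustment,
# e.g., when the expert holds information the model cannot see.
human_adjustment = -0.05  # expert nudges the machine estimate down
anchored = np.clip(machine_forecast + human_adjustment, 0.0, 1.0)

print(f"Independent average: {simple_average:.2f}")
print(f"Machine-anchored:    {anchored:.2f}")
```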
Q: Humans make satisfactory decisions with bounded rationality, while machines use deliberately applied heuristics in algorithms, which may fail to produce optimal solutions. Is bias in machine predictions unavoidable? What strategies can be employed to mitigate these biases?
Unfortunately, bias is hardly ever fully avoidable, due to biased training data (e.g., only a subset of the full population is represented in the data), algorithmic bias (e.g., data scientists must make subjective choices during development), and human biases (e.g., broader societal biases or prejudices can seep into the data). There are many ways to mitigate bias. First, we should make sure that the data we use is representative of the entire target population. Second, we can carry out bias forensics to check that sensitive variables (such as gender or race) are treated fairly. For instance, does the algorithm show the same false positive rate for both men and women? Such targeted testing can reveal problems and point us toward improvements. Lastly, if the algorithm is transparent and explainable, we can understand why it makes the decisions it does, which makes it easier to verify that those decisions are indeed fair.
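As a small illustration of the false-positive-rate check mentioned above, the following Python sketch computes the rate per group on a tiny, made-up dataset; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical screening results: prediction 1 = flagged by the model, label 1 = truly positive.
df = pd.DataFrame({
    "group":      ["men"] * 4 + ["women"] * 4,
    "label":      [0, 0, 1, 1, 0, 0, 0, 1],
    "prediction": [1, 0, 1, 1, 1, 1, 0, 1],
})

def false_positive_rate(g: pd.DataFrame) -> float:
    # Share of true negatives that the model wrongly flags.
    negatives = g[g["label"] == 0]
    return (negatives["prediction"] == 1).mean()

fpr = {name: false_positive_rate(g) for name, g in df.groupby("group")}
print(fpr)  # a large gap between groups points to a potential fairness problem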
Q: How do you envision AI transforming business models over the next decade, and what industries do you believe will be most significantly impacted?
AI is likely to bring about higher levels of personalization of products and services, shifting from mass production to mass customization. AI-driven automation will streamline operations and increase efficiency across business processes, from supply chain management to customer service. For instance, as mentioned earlier, in inventory management it is possible to envision a solution that forecasts future demand for a product based on past demand or sales history, customer feedback, client demographics, promotions, economic conditions, and other variables, and then uses that prediction to optimize the production plan. Other business applications include virtual AI assistants and improved client routing in customer service, helping human resources filter job candidates, predicting sales actions, and optimizing marketing campaigns.
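One simple way the chain from demand forecast to production decision can look is a newsvendor-style calculation; the sketch below is an assumption-laden illustration (normal demand forecast, made-up costs), not the specific solution described in the interview.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical demand forecast expressed as a distribution (mean and spread from some model).
forecast_mean, forecast_std = 500.0, 60.0

# Newsvendor-style decision: balance underage cost (lost margin per unit of unmet demand)
# against overage cost (cost per unsold unit).
underage_cost = 8.0
overage_cost = 3.0
critical_ratio = underage_cost / (underage_cost + overage_cost)

# The optimal quantity is the critical-ratio quantile of the demand distribution;
# here it is approximated by simulating from the normal forecast.
simulated_demand = rng.normal(forecast_mean, forecast_std, 100_000)
order_quantity = np.quantile(simulated_demand, critical_ratio)

print(f"Critical ratio: {critical_ratio:.2f}")
print(f"Planned production quantity: {order_quantity:.0f} units")
```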
When thinking about AI applications, it is important to keep in mind that AI is based on data and adheres to the so-called “garbage-in, garbage-out” principle. Unfortunately, many, if not most, companies today are not AI-ready because their data quality is not good enough. My belief is that in the near future there will be much more focus on data quality management. This can naturally open new business opportunities, say, in automated data cleaning and management.
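A toy data-quality audit gives a feel for what such checks involve; the table and thresholds below are entirely fabricated for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical sales records with typical quality problems.
sales = pd.DataFrame({
    "order_id": [1, 2, 2, 3, 4],
    "quantity": [5, -1, -1, np.nan, 120000],   # negative, missing, and implausible values
    "region":   ["EU", "eu", "eu", "US", None],
})

report = {
    "duplicate_rows": int(sales.duplicated().sum()),
    "missing_values": int(sales.isna().sum().sum()),
    "negative_quantities": int((sales["quantity"] < 0).sum()),
    "inconsistent_region_labels": sales["region"].dropna().str.upper().nunique()
                                  < sales["region"].dropna().nunique(),
}
print(report)  # a quick snapshot of how far the data is from being "AI ready"
```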
Q: What strategies do you recommend for organizations to foster a culture that embraces AI and innovation? How can leaders effectively manage the organizational changes brought about by AI adoption, particularly in terms of workforce adaptation and skill development?
This is an enormous and challenging topic. But if I had to say one thing, I’d highlight the importance of training. And this involves everyone – not just the technical people. Everyone should have a basic understanding of what AI is, how it works, what the company is planning to do with AI, and how that will affect their job security. There is a lot of (over)hype about the capabilities of AI. Training the workforce in the basics of AI can help them see through this hype, begin to treat AI more as a friend than a foe, and start finding helpful use cases in their own everyday work. Online courses, such as our Transforming Business with Artificial Intelligence, can scale to a large number of employees.
Executives, managers, and technical staff can then look for further education and take their understanding of AI beyond the basics. As with so many cultural changes, the role of senior leadership is crucial, and leading by example is powerful. Indeed, many success stories involve a leader who makes data-driven decisions. A CEO who backs up their proposals with data sets the right tone and is on the path to fostering the right kind of culture.
Q: How do you envision a future in which AI possesses superintelligence that goes beyond mimicking human behavior?
Such a future looks both bright and dark. Superintelligent AI could enhance the overall quality of life by improving healthcare, education, and other public services. Of course, this assumes that we can align such AI with human values and keep it under control. And even if the AI were well-behaved, there is always the threat of malicious human users. For now, all of this is hypothetical, as such superintelligence remains purely theoretical at this point.
Overall, I find intelligence to be a very loaded term; there are many ways to be intelligent. It is likely that machines will always excel in some dimensions, while humans will continue to excel in others. In fact, this is what is already happening: AI can process data with a speed, quantity, and rigor that is unimaginable to humans. But AI still has many limitations today. For one, it does not possess a human-level understanding of cause-and-effect relationships in the world. How can we embrace these differences and find the right synergies between the two approaches?
Q: There has been much discussion of AI ethics and regulation. What are the most pressing ethical and regulatory considerations that businesses must address? How can policymakers ensure the ethical use of AI without hampering innovation?
Businesses must ensure fairness and avoid bias in their AI systems, protect users’ privacy and data security, maintain transparency and explainability in AI decision-making, and ensure accountability for AI-driven actions. Businesses must also take into account potential job displacement due to AI and seek to develop practices that mitigate adverse social impacts.
Policymakers should establish clear guidelines for the responsible development and use of AI, encourage both transparency and public accountability, and foster interdisciplinary collaboration between technologists, ethicists, and policymakers. To avoid becoming obsolete, regulations should be updated regularly to keep pace with technological advancements. And, to avoid stifling innovation, the level of oversight should be proportional to the potential harm (similar in spirit to the recent EU AI Act).
Editor: Naikang FENG
July 16, 2024