What is often not appreciated is how deeply the arc of technology is rooted in history. Although the construction of AI may be rational (think mathematical equations), it is the messy logic of society that truly fuels and sustains this technology. This is most evident in the role of data as the foundation of artificial intelligence.
Data is a social construct, and it is never neutral because it depends on who produces it, transmits it, and even reconstructs it.
In colonial Southeast Asia, officials collected data through census surveys and mapping exercises for policymakers in distant capitals who were involved in empire-building. From these data emerged narratives that often reduced colonial subjects to stereotypes, paving the way for conquest and, later, for divide-and-rule policies.
The legacy of this framing casts a shadow over many modern states in Southeast Asia decades after independence. The categories that were used to classify us under colonial rule form the basis of many demographic indicators in official documents today.
In Malaysia, for example, children of mixed marriages are usually officially recognized as belonging to only one race. This not only erases a significant portion of a person's cultural heritage, but also distorts their identity for the sake of bureaucratic expediency.
Policymakers across Southeast Asia appear to recognize the risks of incomplete, inaccurate, or biased data. The recently released ASEAN Guide on AI Governance and Ethics calls for a human-centric approach and cautions against bias. National strategy documents echo similar messages about equity, inclusion, and justice. Even local tech champions have committed to the ethical use of AI.
But fitting the square peg of ethics into the round hole of business can be difficult. Mitigating data bias requires that data sets be comprehensive. At a minimum, data quality cannot be compromised, and there must be informed, explicit consent to collect data for specific rather than general purposes.
There must also be consensus on who owns, manages, or governs this data. This can be especially sensitive when it concerns the cultural knowledge or traditional practices of historically marginalized or exploited groups.
Computer, your biases show
While ethics and economic prospects are not mutually exclusive, they do not always mesh well. AI programs that mask or erase local accents and dialects so that, say, customer service representatives in the Philippines can be better understood by their primarily Western customers raise a dilemma.
On the one hand, such technology eases communication and opens job opportunities to a wider pool of recruits. On the other, it raises troubling questions about global power dynamics and hierarchies, and it flattens cultural difference for purely utilitarian ends.
Perhaps the real question is not how to keep humans at the center by building guardrails around the use of AI. Rather, it is whether framing discussions of AI governance within mainstream, profit-driven business models can ever meaningfully guarantee social and, ultimately, global justice.
As data collection practices of the past, built on an economy of imperial extraction and exploitation, find digital parallels in today's geotagging, biometric surveillance, and behavioral recognition, stakeholders in Southeast Asia will benefit from expanding deliberations around AI beyond monetary calculations.
Ideally, digital technology, including AI, would be weighed equally across all three pillars of ASEAN: political-security, economic, and socio-cultural.
Consultations should involve interdisciplinary experts from diverse backgrounds and geographies; the ASEAN Guide itself recommends including historians, anthropologists, and other scholars of the humanities. Moreover, policy conversations about AI governance in a region as diverse as Southeast Asia, with its recent colonial past, would benefit from exchanges with other parts of the Global South.
There is much to learn from contemporary African scholarship that calls for including Ubuntu – the idea of collective values and of social and environmental interconnection – in the governance of artificial intelligence, from the influence of indigenous knowledge on smart agriculture, and from Latin American communities' experience with algorithmic decision-making systems built in the Global North.
If such perspectives inform discussions on AI governance, the algorithms of Southeast Asia's future can be more than a digital replica of its past.
Elena Nour is a senior fellow in the Asia Program at the Carnegie Endowment for International Peace