Uncovering JAPA

Ethics of AI-Enhanced Planning

Planners constantly seek out new data and skills, prompting and responding to technological and ideological shifts in the field. At our best, planners use new types of data and analysis in ways that minimize bias and align with their communities' ethical principles. In this way, AI is nothing new.

However, the rapid rollout, ubiquity, and efficiency of AI tools may lead to a lack of human oversight and the critical, balanced thinking needed for responsible planning. Planners can proactively create procedures that check AI systems' potential to perpetuate biases, obscure decision-making processes, and infringe on privacy.

In "The Ethical Concerns of Artificial Intelligence in Urban Planning" (Journal of the American Planning Association, Vol. 91, No. 2), Thomas W. Sanchez, Marc Brenman, and Xinyue Ye highlight ethical concerns from existing literature on AI-driven urban planning.

In AI We Trust?

Many planners' favored software programs already integrate AI. The challenge for planners is to adopt ethical strategies for the continued use and incorporation of AI in urban planning.

Trained on large databases of historical human behavior, AI can replicate pre-existing biases. If these data are biased, the AI's decisions may perpetuate and even amplify those biases. The challenge is to ensure that systems are developed with an awareness of potential biases.

Transparency about data sources, data quality, and the potential for bias is crucial for ethical AI use in urban planning. Planners must continually monitor and adjust algorithms to address biases and unintended outcomes as they emerge.

In some cases, human-in-the-loop algorithms, which include human intervention or review, could be utilized to maintain control and oversight. The human-in-the-loop paradigm underscores the critical role of human experts in guiding, intervening, and even correcting AI output and decisions.
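
As a concrete illustration, the Python sketch below shows one way such a review gate could work. The parcel IDs, fields, and thresholds are hypothetical, and any real implementation would reflect local policy: AI-generated recommendations are adopted automatically only when the model is confident and the action is low-stakes; everything else is routed to a human planner.

from dataclasses import dataclass

@dataclass
class Recommendation:
    parcel_id: str
    action: str            # e.g., "prioritize_redevelopment" (hypothetical)
    confidence: float      # model-reported confidence, 0.0 to 1.0
    affects_residents: bool

def route(rec: Recommendation, confidence_floor: float = 0.9) -> str:
    """Adopt automatically only when confident and low-stakes; otherwise send to a planner."""
    if rec.confidence >= confidence_floor and not rec.affects_residents:
        return "auto-adopt"
    return "human review"

queue = [
    Recommendation("027-44-110", "prioritize_redevelopment", 0.72, True),
    Recommendation("031-02-005", "update_land_use_layer", 0.97, False),
]
for rec in queue:
    print(rec.parcel_id, "->", route(rec))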

In short, planners should plan to conduct regular ethical audits and reviews of AI systems to mitigate inherited bias.
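
One simple form such an audit could take is a recurring check of whether an AI tool's recommendations differ sharply across neighborhood or demographic groups. The sketch below uses hypothetical data and column names; the 0.8 threshold is a common screening heuristic for disparate impact, not a legal or definitive standard.

import pandas as pd

# Hypothetical audit data: one row per parcel, with the tool's recommendation
# and the demographic group of the surrounding census tract.
decisions = pd.DataFrame({
    "tract_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "recommended": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = decisions.groupby("tract_group")["recommended"].mean()
disparity_ratio = rates.min() / rates.max()   # 1.0 would indicate parity

print(rates)
print(f"Disparity ratio: {disparity_ratio:.2f}")
if disparity_ratio < 0.8:   # screening threshold; flags the result for review
    print("Flag for human review: recommendation rates differ sharply by group.")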

Figure 2: Human-in-the-loop intervention.

AI in urban planning has the potential to either reinforce existing inequalities or reduce them. For example, by identifying areas for redevelopment, AI could inadvertently accelerate rising property values and rents, displacing existing residents and businesses. Human oversight is needed to ensure that AI-driven redevelopment analysis does not accelerate gentrification or displacement.

Urban planners can proactively develop affordable housing initiatives, tenant protections, and community land trusts. Addressing this issue requires planners and organizations to be vigilant in identifying and mitigating biases within AI systems. This may involve refining algorithms, diversifying training data, and implementing regular audits to ensure fairness and equity in decision-making processes.

Privacy Rights and Transparency

AI systems rely on vast amounts of collected data. To analyze traffic patterns, for example, planners pull data from public records, sensor networks, social media, and mobile applications. These data may include sensitive information about people's movements, habits, and social interactions, which AI may capture in high resolution.

De-identification and anonymization methods can remove personally identifiable information from data sets. However, it is essential to recognize that de-identification is not infallible.
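
A minimal sketch of what de-identifying a mobility data set might involve appears below. The fields, salt value, and coarsening levels are hypothetical, and, as the authors caution, even treated data can sometimes be re-identified, so this is a starting point rather than a guarantee.

import hashlib

import pandas as pd

SALT = "replace-with-a-secret-value"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, truncated hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

trips = pd.DataFrame({
    "name": ["J. Doe"],
    "device_id": ["ABC123"],
    "lat": [35.91342],
    "lon": [-79.05584],
    "timestamp": pd.to_datetime(["2025-02-27 08:17:43"]),
})

deidentified = (
    trips.drop(columns=["name"])                              # drop direct identifiers
         .assign(
             device_id=trips["device_id"].map(pseudonymize),  # pseudonymize device IDs
             lat=trips["lat"].round(2),                       # coarsen location
             lon=trips["lon"].round(2),
             timestamp=trips["timestamp"].dt.floor("h"),      # coarsen time to the hour
         )
)
print(deidentified)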

Planners will have to balance collecting data for insightful analysis with respecting individuals' privacy. This is an ongoing ethical question that planners should proactively consider and address.

The authors stress that individuals should be fully informed about the data being collected, the purpose, and how their information will be used. Urban planners and planning organizations bear the ethical responsibility to institute strict data security measures and be transparent in how they collect and use data. Not only is security an ethical imperative, but it is also a legal requirement in many jurisdictions.

Building Strong Ethical Frameworks

The authors urge urban planners and AI developers to engage directly with their communities in a co-creation approach.

They want to ensure that AI-driven solutions are grounded in the real-world experiences and challenges of those they aim to serve. Community involvement can help identify potential biases in data and decision-making, ensuring that the AI's recommendations do not inadvertently harm marginalized groups. Involvement can help demystify AI, making it a tool for the community rather than a force acting upon it.

This commitment to inclusivity and transparency is intended to build trust, incorporate diverse perspectives into planning decisions, and avoid adverse impacts on marginalized communities. It is up to planners to think critically and proactively to build a strong ethical framework from which to implement AI.

KEY TAKEAWAYS

  • Proactively implement strong ethical guidelines for AI use in the planning process.
  • Develop robust privacy and data security policies.
  • Prioritize transparency and accountability by engaging with diverse community groups.
  • Ensure inclusive datasets to mitigate bias.
  • Continue to monitor AI algorithms by conducting regular ethics audits and reviews.
  • Demystify AI and build public trust through public engagement and education.

Additional APA Resources

For further exploration of AI and planning ethics, refer to other APA resources:

Top image: Photo by iStock/Getty Images Plus/Thinkhubstudio


ABOUT THE AUTHOR

Grant Holub-Moorman is a master's student in city and regional planning at the University of North Carolina at Chapel Hill.

February 27, 2025

By Grant Holub-Moorman