Letter

New Wilderness Requires Algorithmic Transparency: A Response to Cantrell et al.

Victor Galaz 1,2,* and Abdul M. Mouazen 3

Can rapid advances in robotics, increasingly sophisticated algorithms, and exponential increases in data availability help us create and maintain wild places? In their recent Opinion article, Cantrell and colleagues [1] explore the potential for fully automated systems to create and sustain new forms of wild places 'without ongoing direct human intervention' (p. 1). They also elaborate how these systems could contribute to new ways of automating environmental management and creating a new wildness. Their proposed automated curation of 'wild places' is thought-provoking and raises a number of new questions about how such systems could be developed in ways that ensure algorithmic transparency, accountability, and public deliberation.

Algorithms and Human Influence

As the authors note, 'wild' could be understood as 'a state of existing in relative freedom from human interventions' (p. 1). Whether such autonomy can be achieved at all through the design of intelligent autonomous systems such as toxic cleanup swarm robots, drone-facilitated reforestation, and automated agriculture remains an open question, since such systems are likely to be far from 'free' from human intervention. The main reason is that these intelligent systems build on algorithms: step-by-step sequences of operations that solve particular computational tasks (for more elaborate definitions, see [2,3]). Often, the assumptions embedded in these algorithms result in flawless automated decisions. At times, however, algorithmic decisions may lead to damaging social and environmental consequences.
As studies by scholars of machine learning and artificial intelligence (AI) have shown (e.g., [3–6]), the design of intelligent systems (such as those explored by Cantrell and coauthors) depends on the assumptions inserted into the algorithms and on the data that provide the foundation for their training and operation. For example, if the dataset used for machine learning is incomplete, biased (which is often the case), or unable to capture the dynamic, changing nature of the system of interest, the resulting actions may well be damaging. These risks have become clear in sectors where automated systems have made rapid progress, such as finance and crime prevention. Examples include algorithms that, as a result of biased input data, discriminate against persons of color by providing unjustifiably low credit scores, or lead to an incorrect assessment of the likelihood of such a person committing a future crime [3,6,7]. Some intelligent systems may even, despite their algorithmic sophistication, inherit and amplify gender discrimination due to biases in the learning data [4].

A Call for Algorithmic Transparency and Governance

Hence, even highly autonomous and intelligent systems such as those elaborated by Cantrell and colleagues are prone to mistakes. This leads to a number of important questions worth further elaboration. For example, how do we ensure that the datasets used for machine learning in the design of these automated wilderness curators do not contain serious mistakes or biases? Who is responsible for overseeing that the complexity of the social–ecological system of interest is properly captured in the algorithms implanted in the curator? If irreparable damage is done to an ecosystem and/or the communities dependent on it, who should be held accountable? Also, which key principles should guide the decisions made by an autonomous curator?
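How a skewed training sample propagates into skewed automated decisions can be sketched with a deliberately simple, hypothetical example (the soil types, moisture values, and learning rule below are illustrative assumptions, not drawn from the letter or from any cited system). An automated agricultural 'curator' learns a single irrigation threshold from historical readings that over-represent one soil type:

```python
# Hypothetical sketch: a biased training sample yields a biased decision rule.
# Soil type A (well sampled) is dry below moisture 30; soil type B (rare in
# the training data) retains water and is only dry below 15.

def learn_threshold(readings):
    """Learn a single irrigation threshold as the mean of the sample."""
    return sum(readings) / len(readings)

training = [28, 32, 30, 29, 31, 33, 27, 30,  # eight type-A readings
            14]                               # a single type-B reading

threshold = learn_threshold(training)  # ~28.2, dominated by type A

def should_irrigate(moisture):
    return moisture < threshold

# A type-B plot at moisture 20 is already wet enough (its dry point is 15),
# yet the threshold learned from the skewed sample triggers irrigation anyway.
print(should_irrigate(20))  # True: a damaging action for type-B soil
```

The model is not 'wrong' on its own terms; it faithfully reflects a training set in which one part of the system of interest is nearly invisible, which is precisely the failure mode described above.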
For example, should automated farming prioritize optimization (e.g., monocultures) or resilience (e.g., multifunctional landscapes)? How should such autonomous systems address ecosystem service tradeoffs between, for example, instrumental and esthetic or spiritual values? Clearly, not even artificially intelligent autonomous systems will be able to resolve these deeply value-laden tensions.

In addition, it should be noted that there may even be an inherent tradeoff between algorithmic effectiveness and transparency. That is, the algorithms that have proven to be the most powerful (e.g., neural networks) also tend to be those that are the most difficult to interpret, and therefore to scrutinize, due to their operational logic ([5], see p. 93). This creates a 'new wilderness' paradox: the higher the technological potential for automated curation of wild places through, for example, AI and robotics, the stronger the need for human supervision.

It should be noted that a suite of sophisticated machine learning, genetic learning, and 3D object recognition algorithms are already being used to support landscape planning, conservation decisions, fish stock assessments, deep sea mining of rare earth minerals, and precision farming. Their use is likely to continue to increase in sectors that shape the biosphere. This not only forces us to rethink what we perceive as 'wilderness' in the Anthropocene. It also urges us to recognize the urgent need for algorithmic transparency and continued human oversight. Such a vision and code of conduct, or a 'biosphere code' [8], needs to be developed in parallel with the rapid pace of technological change. Only then will we be able to harness these rapidly advancing technologies in ways that benefit both people and the biosphere.
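The effectiveness–transparency tension described above can be illustrated schematically (all functions, weights, and thresholds below are arbitrary assumptions for illustration, not models from the letter). A rule-based decision can state its criterion; even a tiny fixed-weight network exposes its parameters but not a human-readable rationale:

```python
# Schematic contrast between an auditable rule and an opaque scoring model.
import math

def rule_based(moisture, temperature):
    """Transparent: the decision criterion can be read and audited."""
    decision = moisture < 25 and temperature > 18
    reason = f"moisture {moisture} < 25 and temperature {temperature} > 18"
    return decision, reason

def tiny_network(moisture, temperature):
    """Opaque: the weights are inspectable, but the 'rule' they encode is not."""
    w1, w2, b = 0.8, -0.3, 1.5   # arbitrary illustrative weights
    h = math.tanh(w1 * moisture + w2 * temperature + b)  # hidden unit
    score = 1 / (1 + math.exp(-(2.0 * h - 1.0)))         # output unit
    return score > 0.5, f"score {score:.3f} (no human-readable rationale)"

print(rule_based(20, 22))   # decision plus an explicit reason
print(tiny_network(20, 22)) # decision plus only a numeric score
```

Both models can agree on a given input, but only the first can be scrutinized directly; for the second, interpretation requires additional tooling and human oversight, which is the supervision burden the 'new wilderness' paradox points to.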