This work is distributed under the Creative Commons Attribution 4.0 License.
Software sustainability of global impact models
Abstract. Research software for simulating Earth processes enables estimating past, current, and future world states and guides policy. However, this modelling software is often developed by scientists with limited training, time, and funding, leading to software that is hard to understand, (re)use, modify, and maintain, and is, in this sense, non-sustainable. Here we evaluate the sustainability of global-scale impact models across ten research fields. We use nine sustainability indicators for our assessment. Five of these indicators – documentation, version control, open-source license, provision of software in containers, and the number of active developers – are related to best practices in software engineering and characterize overall software sustainability. The remaining four – comment density, modularity, automated testing, and adherence to coding standards – contribute to code quality, an important factor in software sustainability. We found that 29 % (32 out of 112) of the global impact models (GIMs) participating in the Inter-Sectoral Impact Model Intercomparison Project were accessible without contacting the developers. Regarding best practices in software engineering, 75 % of the 32 GIMs have some kind of documentation, 81 % use version control, and 69 % have an open-source license. Only 16 % provide the software in containerized form; the lack of containerization can limit the reproducibility of results. Four models had no active development after 2020. Regarding code quality, we found that the models generally suffer from low-quality code, which impedes model improvement, maintenance, reusability, and reliability. Key issues include a non-optimal comment density in 75 %, insufficient modularity in 88 %, and the absence of a testing suite in 72 % of the GIMs. Furthermore, only 5 of the 10 models whose source code is written partly or entirely in Python show good compliance with the PEP 8 coding standard, while the rest show low compliance. To improve the sustainability of GIMs and other research software, we recommend best practices for sustainable software development to the scientific community. As an example of implementing these best practices, we show how reprogramming a legacy model following best practices improved its software sustainability.
Status: closed
-
CC1: 'Comment on gmd-2024-97', Tijn Berends, 14 Jun 2024
Unsolicited review by Tijn Berends
I am glad to see a manuscript like this. With the ever-increasing political and societal demand for new, more accurate scientific knowledge about the Earth system, and particularly its future state, the complexity of computational models has grown exponentially over the past few decades. The need for software engineering skills, on top of the knowledge of the scientific domain, and the wide and varying set of skills required of an active research scientist, is now an undeniable reality. New literature investigating exactly what “software engineering skills” entails in the context of research software is therefore a valuable addition. Having the luxury of being an uninvited reviewer, I can confine myself to pointing out only the bits that really strike me, and leave the detailed feedback to the invited reviewers. Two points stand out to me in this manuscript that I think could do with some improvement.
Firstly, there is the concept of “self-explanatory code” mentioned in lines 406-413. While I appreciate that the authors are merely citing another group’s description of that group’s own work, I believe this statement needs a disclaimer. Depending on how you define “self-explanatory”, either all code qualifies as such, or none. If, for example, we define code as “self-explanatory” when one can eventually arrive at an understanding of its functionality without consulting the original author, then all code is self-explanatory – with, of course, the caveat that “eventually” can, in many cases, be prohibitively far into the future. On the other hand, if we define code as “self-explanatory” when we require no other resources to (again, eventually) understand its functionality, then probably no code ever meets this definition, at least in the context of research software, which always requires a substantial level of background knowledge on the part of the developer. E.g., is the code that calculates the sea-level equivalent volume of an ice sheet truly self-explanatory if it does not explain the concept of sea-level equivalent volume? In my view, these considerations illustrate that the phrase “self-explanatory code” is so difficult to define as to be practically meaningless. In my experience, it is used mainly by people who inherited code from their supervisor that is not as well-commented as they’d like it to be, but cannot say so out loud for fear of their career prospects. I’m sure the authors can add these considerations, possibly in a rephrased manner, to their revised manuscript.
Secondly, there is the first of the authors’ recommended best practices, in lines 470-473, where they support the use of Agile as a project management framework. Having briefly worked at a company that applied this framework (and much longer as a researcher building my own numerical models), I have some small amount of experience with it, and I must say I do not immediately see its value in a research setting. The highly individualistic nature of scientific research(ers), the very poorly-defined goals, scope, and expected duration of research projects, as well as the high degree of overlap between developers, managers, stakeholders, and users, make the use of Agile difficult in a research context. Additionally, while I see how Agile working can improve the speed with which a large group of people produce output within a certain project, I do not immediately see how Agile affects the quality of that output – which is the subject of this manuscript. I.e., agile scientists might produce science faster, but not necessarily better. If the authors can provide arguments to the contrary, I’d happily read them, but right now nothing of the sort is written in the manuscript – in fact, the concept of Agile is only mentioned briefly at one point in the introduction, and then does not appear again until it is listed as the first “recommended best practice”. I hope the authors can remedy this lack of evidence for their claim of Agile being a “best practice” in the revised version of their manuscript.
Citation: https://doi.org/10.5194/gmd-2024-97-CC1
-
AC3: 'Reply on CC1', Emmanuel Nyenah, 20 Sep 2024
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2024-97/gmd-2024-97-AC3-supplement.pdf
-
AC4: 'Reply on CC1', Emmanuel Nyenah, 20 Sep 2024
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2024-97/gmd-2024-97-AC4-supplement.pdf
-
RC1: 'Comment on gmd-2024-97 by Rolf Hut', Rolf Hut, 10 Jul 2024
-
AC1: 'Reply on RC1', Emmanuel Nyenah, 20 Sep 2024
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2024-97/gmd-2024-97-AC1-supplement.pdf
-
RC2: 'Comment on gmd-2024-97', Facundo Sapienza, 27 Aug 2024
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2024-97/gmd-2024-97-RC2-supplement.pdf
-
AC2: 'Reply on RC2', Emmanuel Nyenah, 20 Sep 2024
The comment was uploaded in the form of a supplement: https://gmd.copernicus.org/preprints/gmd-2024-97/gmd-2024-97-AC2-supplement.pdf
Data sets
Software sustainability of global impact models (Dataset and analysis script) Emmanuel Nyenah, Petra Döll, Daniel S. Katz, and Robert Reinecke https://doi.org/10.5281/zenodo.11217739
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 555 | 134 | 30 | 719 | 26 | 14 | 12 |