This article describes the goals and objectives behind a community effort to build a testing and certification system for "HPC" skills. As the profession of Research Facilitation grows, there is a clear benefit to having a common testing and certification framework to help define the technical skills needed in the profession. The article motivates the need for this system and encourages further reading in two referenced articles. Some notable aspects of the approach laid out in the article include:
- This is a volunteer, community-driven effort.
- The authors of the article come from academia rather than commercial training, which lends credibility to the approach.
- This is a global effort with contributors in multiple countries.
- The testing/certification is hierarchical and can be viewed as a smorgasbord of topics that can be tailored to different situations and goals (instead of dictating a one-size-fits-all curriculum).
Although the article is a good overview, its lack of detail makes it difficult for the reader to know exactly what is intended. For example, when reading the article I found myself asking these general questions:
- Although a few applications were mentioned, the primary audience for the testing/certification system is unclear. Is it research facilitators? HPC Users? System administrators? Software Developers? It may be "all-of-the-above" and the hierarchical nature of the testing is intended to cover everything. However, I have to believe this would make the material unfocused.
- What qualifies as HPC is still an open and often debated question. How are decisions made as to what should be included and what should be excluded? How are priorities set? For example, I did a quick review of the website and noticed there was no reference to High Throughput Computing (HTC). Is this intentional?
- As a volunteer organization, it must be difficult to make final decisions. How is the group organized? There is a governance tab on the website with a mention of "voting rights", but it is unclear how decisions are made. Is there a benevolent dictator? A type of peer-review process? Is everything put to a popular vote?
- What happens when I want my highly specialized question or topic added to the hierarchy? What is being done to prevent chaotic growth of the materials? What is being done to avoid bias?
- This is not a one-and-done project, and I am concerned about the longevity of this approach. This seems to be a labor of love, but I don't see any indication of a long-term strategy. As a volunteer-only organization, what efforts are being made to connect with established organizations (ACM, XSEDE, PRACE) to help ensure continuity and longevity?
As this article is an introduction to the project, it lacks a lot of detail and leaves the reader with many questions, which is both good and bad. I think part of the goal of the article is to pique the reader's interest and provide just enough information to encourage going out and learning more. However, the lack of detail also leaves more questions than answers.
The following are some suggestions to improve the article:
- Change the subtitle from "Community-Lead" to "Community-Led"
- Describe the current state of the project and give a timeline. How far along is the project? For example, state when version 1.0 of the tests will be available (or some similar milestone). This information is unclear from the article and very unclear from the website.
- Provide a clear definition of what it means to be an HPC "practitioner". Even if the definition is intended to be open, this needs to be clear to the reader.
- Provide more examples. A few sections of the hierarchy were explained, but I would like to have seen some example questions.
- Is there any attempt to define or use a pedagogical framework for building and evaluating the individual assessments? Quite a bit of research has gone into how to build effective evaluations, and it feels like this project is lacking in that area.