Most research teams produce a considerable number of research prototypes and proof-of-concept tools. The quality of these tools is usually low (they may lack documentation or tests, may not scale, …), and most are abandoned once the paper for which the tool was originally created gets published.
My point is not to criticize this situation. We would all love to industrialize our tools and make sure they have an impact on society, but the current funding and evaluation system does not really help here. Depending on the stage of your career, you'll find it very difficult to justify the time investment required to improve your tools (versus, for instance, publishing one more paper).
What I do ask you to do, instead, is to be honest and realistic about the effort you're putting into the active development and maintenance of each tool. This will help avoid misunderstandings with potential contributors (who may expect a response time and level of effort you are not willing to give).
At SOM, we have done the exercise of classifying each of our tools into four maintenance categories (taken from here):
- Experimental: Brand new! Anything goes!
- Active: Ongoing work! Upcoming features!
- Stable: Nothing new on our roadmap.
- Archived: Fork it if you like it!
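As an illustration only, here is a minimal Python sketch of how a team could record such a classification in code and generate a status overview of its tool inventory. The `MaintenanceCategory` enum and the tool names are hypothetical examples, not part of SOM's actual setup.

```python
from enum import Enum

class MaintenanceCategory(Enum):
    """The four maintenance categories described above."""
    EXPERIMENTAL = "Brand new! Anything goes!"
    ACTIVE = "Ongoing work! Upcoming features!"
    STABLE = "Nothing new on our roadmap."
    ARCHIVED = "Fork it if you like it!"

# A hypothetical inventory mapping each tool to its declared status.
tools = {
    "prototype-analyzer": MaintenanceCategory.EXPERIMENTAL,
    "core-library": MaintenanceCategory.ACTIVE,
    "legacy-visualizer": MaintenanceCategory.ARCHIVED,
}

# Print a simple status report, e.g. for a team README or website.
for name, category in tools.items():
    print(f"{name}: {category.name} ({category.value})")
```

Publishing something like this output alongside each tool makes the team's maintenance commitment explicit to potential contributors.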
I think this is a valuable exercise that every research team should attempt, not only for the reasons given above, but also because it forces the team to go through its tools and consider the best path forward for each of them.