by jordicabot | Dec 28, 2022 | evaluating research, Research Rants, Uncategorized
More and more, evaluation agencies and the research community as a whole are evolving to reward quality over quantity. All recent research assessment manifestos (e.g. DORA) clearly push for a qualitative evaluation where plain numbers (total number of publications,...
by jordicabot | Nov 2, 2020 | publishing, evaluating research, Research Rants
This year, we got two best demo awards: Gadolinium: Monitoring Non-Functional Properties of REST APIs won the best poster/demo award at ICWE 2020; PapyGame: Let’s Play Modeling won the best demo award at Models 2020. And I’m proud of it. You may think that...
by jordicabot | Aug 17, 2020 | doing research, evaluating research, philosophy, Research Rants
Last month, I was discussing with Feli, from the budgeting team at my university, an upgrade to the plan I had with WPEngine, as we were paying overage charges due to excess visits. The plan was capped at 100,000 visits per month, and many months we were going over...
by jordicabot | Mar 21, 2017 | publishing, evaluating research, Research Rants
I have the feeling that more and more people cite workshop papers to support their claims, as if workshop papers were peer-reviewed. They are not. At most, they are “peer-filtered” (meaning that the workshop PC checked the work to make sure authors...
by jordicabot | Aug 26, 2015 | evaluating research, Research Rants
First of all, the “I” in the post title is not me; it’s Richard Paige, Professor of Enterprise Systems and (as of May 2015) Deputy Head of Department (Research) in the Department of Computer Science at the University of York. And besides all this,...
by jordicabot | Aug 6, 2015 | evaluating research, Research Rants
Read about the new features of metaScience, an online service we released a couple of months ago to give some insights into conferences (see the original announcement here). We have now released the new version of our service, which features new metrics for conferences...