(Before you think this can’t happen, please read this story of my former colleague Steve Easterbrook.)
Everybody makes mistakes, and that includes messing up reviews and uploading the wrong review to the submission system. In some conferences you are assigned 10 or more papers to review, so it’s not unlikely that this happens.
In fact, the real problem is that nobody noticed the mistake, not even the editor/PC chair/whoever is in charge of collecting the reviews, analyzing them, and making a final decision. As we already discussed here, their task is not simply calculating the average mark for each paper and ordering the papers accordingly (if that were the case, a simple algorithm could do a much better job), and yet, clearly, this is what happens on some occasions.
To fix this, Steve proposes that each review start with a summary paragraph of what the paper is about, so that the editor can quickly check whether the review belongs to the right paper. I’m wondering if we could just get rid of the summary marks for the papers: they make the editors’/chairs’ job too easy and allow them to skip reading the full reviews. This is probably not feasible for conferences, given the huge load of papers to review in a short timespan, but it could be helpful for journal and research proposal evaluations.
I think the huge load is the problem. Look at CAISE: I calculated that each program committee member had to review something like 13 papers in 2 months, including Christmas. So on top of your own research, supervising students, family time (hah!), and teaching, how much time will you spend per paper? Not enough, I bet.
The bigger question, to my mind, is why there are so few retractions (if any) in computer science. Is it that no one is making up numbers? And if they were, would we catch them?