From Unofficial BOINC Wiki
The Validation Process is the process of comparing Redundant Results and deciding which is to be considered correct. Because floating-point arithmetic varies between platforms, this decision process is application-specific.
Each returned Result is compared with the other Results returned for the same Work Unit; the Results are checked for errors and, where they agree, used to establish the Quorum of Results. Be aware that each Project has its own policy and rules that dictate the outcome of the Validation Process. When Results are accepted as valid, they generally also qualify the Participant for the granting of Credit.
It is the actual content of the Result Data Files that is verified, not the amount of Credit given to the computers that processed them. This ensures the scientific value of the processed Work Unit. If your computer returns a different result than the other two (or more, depending on the Quorum Size) computers, your Result will be flagged as invalid, and the Work Unit will be sent to another Participant's computer. This is one of the features that makes the BOINC Powered Projects superior to SETI@Home Classic.
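The comparison step can be sketched in code. This is an illustrative model only, not actual BOINC server code: it assumes each Result reduces to a single floating-point value, that "agreement" means values within a small tolerance of one another, and the names `validate_results`, `quorum_size`, and `tolerance` are hypothetical. Real validators are application-specific, as noted above.

```python
# Illustrative sketch only -- not actual BOINC server code.
# Assumes each Result is a single floating-point value and that
# "agreement" means values lying within `tolerance` of each other.

def validate_results(results, quorum_size=3, tolerance=1e-6):
    """Group Results that agree within `tolerance`; if any group
    reaches `quorum_size`, its members are valid, the rest invalid."""
    groups = []
    for value in results:
        for group in groups:
            if abs(group[0] - value) <= tolerance:
                group.append(value)
                break
        else:
            groups.append([value])
    # The first group large enough forms the quorum (the "canonical" result).
    canonical = next((g for g in groups if len(g) >= quorum_size), None)
    if canonical is None:
        return None  # no quorum yet; more Results are needed
    return canonical

# Three Results agree within tolerance; the outlier 7.5 would be invalid.
print(validate_results([1.0000001, 1.0000002, 1.0, 7.5]))
```

In this toy model a Result that disagrees with the quorum simply fails to join the canonical group, which mirrors the "flagged as invalid" case described above.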
There are some potential pitfalls in the validation process. As Scott Brown said:
Just to add to the general discussion and to Paul's comments regarding the statistics of the matter (since I work with statistical analysis practically on a daily basis), the credit system appears to be set up such that the third result will always be less likely to get credit.
To illustrate, one host returns a result that is marked as successful. This host is then compared to a subsequent host returning the Result and both are validated. It is this process that bounds the error distribution (the range of 'valid' results). Let us assume that the second host's results were lower (shorter time, etc.) than those of the first host. A third host then returns a successful result which exceeds the upper bound of this error distribution. It is then marked as invalid (this can also work in the other direction). Thus, the zero credit result occurs.
It is entirely possible that this third host would have been validated with the first host had it returned the result before the second host. In other words, since the error distribution used for validation is a function of the first two successful hosts, the likelihood that the third host will be invalid is necessarily increased (note that I have assumed that the first two results were not drastically different to begin with--I would assume that there are fixed limits for the possible range of initial validation).
I am in agreement with Paul that three results should be used in validation. As it stands now, while the likelihood that two validated Hosts are in error is less than the third Host being in error, the chance that the third host is actually the valid result is high enough to make me uncomfortable (and I think others, too).
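The order effect Scott Brown describes can be illustrated with a small Monte Carlo sketch. The assumptions here are mine, not drawn from any Project's validator: each host's numeric result is an independent draw from the same distribution, and the third result is accepted only if it lies within the range spanned by the first two (plus an optional tolerance).

```python
# Monte Carlo sketch of the order effect described in the quote above.
# Assumptions (not from BOINC source): hosts' results are independent
# draws from one distribution; the first two successful results bound
# the acceptance interval for the third.

import random

def third_result_invalid_rate(trials=100_000, tolerance=0.0, seed=42):
    rng = random.Random(seed)
    invalid = 0
    for _ in range(trials):
        r1, r2, r3 = (rng.gauss(0.0, 1.0) for _ in range(3))
        lo = min(r1, r2) - tolerance
        hi = max(r1, r2) + tolerance
        if not (lo <= r3 <= hi):
            invalid += 1
    return invalid / trials

# With zero tolerance, symmetry says the third of three identically
# distributed draws falls outside the range of the first two about
# 2/3 of the time.
print(round(third_result_invalid_rate(), 3))
```

With a wider `tolerance` the invalid rate drops, which matches the intuition in the quote: the tighter the bound set by the first two hosts, the more likely the third host is the one to miss out.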
Take a look at A Simple Example of the Validation Process.