Evaluation and research integrity
On the one hand, it seems obvious that such an incentive scheme, which rewards tangible outputs rather than the intrinsic interest of the research, will lead people to cut corners to secure those outputs. On the other hand, no senior academic or administrator will say that doing so is good for your career. As usual, the truth seems to lie somewhere in the middle. The vast majority of researchers are able to resist the temptation of outright fraudulent research, even when the outcomes would seem to boost their careers. At the same time, questionable research practices (QRPs) abound. These are practices that are not exactly fraudulent, but that serve the interest of the researcher more than they serve science at large (Fanelli, 2009; Steneck, 2006). For example, it serves a researcher focused on the h-index to split a body of results into as many papers as reasonably possible, a practice known as salami slicing. Science would probably be better served if the results were condensed into one or a few more comprehensive publications.
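To make the arithmetic behind this incentive concrete, the sketch below computes the h-index for two hypothetical publication strategies with the same total citation count. The numbers are invented purely for illustration, and the assumption that citations spread evenly over the sliced papers is a deliberate simplification.

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical scenario: the same body of work published either as
# two comprehensive papers or sliced into six thinner ones,
# with the (simplified) assumption of 60 citations in total either way.
comprehensive = [30, 30]                   # 2 papers
salami_sliced = [10, 10, 10, 10, 10, 10]   # 6 papers

print(h_index(comprehensive))   # 2
print(h_index(salami_sliced))   # 6
```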
Another issue is the discarding of statistical outliers: measurement points that cannot be properly explained. Strictly speaking, this is not fraud, since one only reports measurements that were actually made. But it clearly does not serve science, as the outlier might be caused by a mechanism that is crucial to the overall explanation. What seems even worse is when entire studies go unpublished because the outcomes are not interesting. Uninteresting outcomes can still benefit science: they refute specific expectations, and publishing them prevents others from making the same fruitless attempt later. However, many journals are not interested in publishing such outcomes, articles with null results are unlikely to be cited (mind the h-index!), and to the researcher writing them up may feel like a waste of time.
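As an illustrative sketch only, with invented measurements rather than data from any real study, the following snippet shows how quietly dropping a single unexplained point can change the apparent trend in a data set:

```python
import numpy as np

# Invented measurements for illustration: the last point (x = 5) deviates
# strongly from the otherwise clean linear trend.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 25.0])

slope_all, _ = np.polyfit(x, y, 1)            # fit including the outlier
slope_trimmed, _ = np.polyfit(x[:-1], y[:-1], 1)  # fit after discarding it

print(f"slope with the outlier:    {slope_all:.2f}")
print(f"slope without the outlier: {slope_trimmed:.2f}")
# Discarding the point yields a tidier fit, but it silently hides whatever
# mechanism produced the deviation, which may be the interesting part.
```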