Distributed computing enables parallel execution of the tasks that make up a large computing job. Random fluctuations in service times, inherent to computing environments, often cause a non-negligible number of straggling tasks with long completion times. Redundancy, in the form of task replication and erasure coding, has emerged as a potentially powerful way to curtail this variability, as it provides diversity that allows a job to be completed when only a subset of the redundant tasks is executed. Thus both redundancy and parallelism reduce the execution time, but they compete for the system's resources. When resources are constrained (here, a fixed number of parallel servers), increasing redundancy reduces the available level of parallelism. We characterize the diversity vs. parallelism tradeoff for three common models of task-size-dependent execution times. We find that the different models operate optimally at different levels of redundancy, and thus may require very different code rates.
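The tradeoff can be illustrated with a small Monte Carlo sketch. Below, a job of fixed total work is split into k tasks, encoded into n coded tasks (an MDS code of rate k/n), and the job finishes as soon as any k of the n tasks complete. The service-time model here, exponential with rate proportional to 1/(task size), is an illustrative assumption for this sketch, not one of the three models analyzed in the paper; under it, the estimator simply averages the k-th order statistic of n i.i.d. exponentials.

```python
import random

def job_time(n_servers, k, mu=1.0, trials=2000, seed=0):
    """Estimate the expected job completion time when the job is split
    into k tasks, encoded into n_servers coded tasks (rate k/n_servers),
    and the job completes once any k tasks finish.

    Assumption (illustrative only): each coded task carries 1/k of the
    job's work, and its service time is exponential with mean
    proportional to the task size, i.e. rate mu * k.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # draw one service time per server, sort, take the k-th smallest
        times = sorted(rng.expovariate(mu * k) for _ in range(n_servers))
        total += times[k - 1]  # k-th order statistic: job done at this instant
    return total / trials

# Fixed pool of 10 servers: sweep the code rate k/10.
for k in (1, 2, 5, 10):
    print(f"rate {k}/10: E[T] ~ {job_time(10, k):.3f}")
```

In this purely exponential setting, lower rates (more redundancy) shorten the expected completion time, since the k-th order statistic of n exponentials with rate mu*k has mean (H_n - H_{n-k}) / (mu*k); other service-time models shift the optimum toward higher rates, which is the paper's point.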