You need to have a control in your experiment so that you can be reasonably confident the results are due to the variables you are testing. A positive control is when you run the experiment on something whose effect you already know, whereas a negative control is when you run it on something you know will have no effect.
I found this for you and copied it, just in case what I explained wasn't clear:
(a) A positive control is a level of treatment that is expected to result in a change in the value of a dependent variable. The purpose of the positive control is to serve as proof that the experiment can produce a positive result, i.e., a change in the value of a dependent variable.
(b) If a positive control is not included in a protocol, then a lack of change in the value of a dependent variable may be due to:
(i) a genuine negative result,
(ii) a protocol that is not capable of producing a positive result (a.k.a. systematic error), or
(iii) experimental error in the course of performing the protocol.
(c) Of course, the latter could occur for individual measurements rather than for the entire experiment, but that is why one does more than one replication.
(a) The negative control level of treatment often corresponds to the control treatment, i.e., the untreated or baseline group.
(b) The negative control is supposed to result in a lack of change (or some baseline display value) in the dependent variable. This way one may determine whether experimental levels of treatment produce a change in the dependent variable.
(c) The negative control also serves as proof that a given protocol is capable of giving baseline results.
(i) That is, if a given level of treatment produces a given value for the dependent variable, we will only know if that value is a consequence of varying the independent variable if we have something with which to compare that value of the dependent variable.
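If it helps to see the logic concretely, here is a minimal sketch of how one might apply both controls when analyzing a run. All names, readings, and the fold-change threshold are hypothetical, chosen only to illustrate the reasoning above: the positive control proves the protocol can produce a change, and the negative control supplies the baseline against which a treatment effect is judged.

```python
# Sketch: using positive and negative controls to interpret an assay run.
# All names and threshold values here are hypothetical illustrations.

from statistics import mean

def validate_run(negative, positive, min_effect=2.0):
    """Check that the controls behaved as expected before trusting
    the experimental readings.

    negative: readings from the negative control (expected: baseline)
    positive: readings from the positive control (expected: clear change)
    min_effect: hypothetical minimum fold-change over baseline that the
                positive control must show for the protocol to count as
                capable of producing a positive result.
    """
    baseline = mean(negative)
    if mean(positive) < min_effect * baseline:
        # The protocol failed to reproduce a known positive result, so
        # any "no change" in the experimental groups is uninterpretable:
        # it could be systematic or experimental error, not a negative result.
        return False, baseline
    return True, baseline

def effect_detected(experimental, baseline, min_effect=2.0):
    """A treatment counts as producing a change only relative to the
    negative-control baseline."""
    return mean(experimental) >= min_effect * baseline

# Hypothetical readings from one run:
neg = [1.0, 1.1, 0.9]      # negative control: baseline values
pos = [5.2, 4.8, 5.0]      # positive control: known effect
treated = [3.9, 4.1, 4.0]  # experimental level of treatment

ok, baseline = validate_run(neg, pos)
if ok:
    print("treatment effect:", effect_detected(treated, baseline))
```

The point of the structure is that `effect_detected` is only meaningful once `validate_run` passes: without the positive control you cannot distinguish a true negative from a broken protocol, and without the negative control you have no baseline to compare against.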
Others have answered this before; check it out.