I set up an internal standard method on our GC. I made five calibration solutions (just the reference compound and the internal standard) and injected 1.0 uL of each. The five data points gave a straight line (wt ref/wt std vs. area ref/area std) with R-squared > 0.99. The concentration of the reference (the compound I have to analyze) ranged from 9% to 67% by weight across the calibration set.

The actual unknown samples contain solvent, and the unknowns will have about 25% by weight of the reference, so I am inside the concentration range of the calibration set. I GC'd two different volumes of one sample (0.2 uL and 1.0 uL) and got very different results. The 0.2 uL injection put less of the reference on the column than the most dilute calibration sample did, while the 1.0 uL injection put an amount of the reference on the column close to that of one of the calibration solutions.

Does the weight of reference going onto the column matter, as long as the concentration of the reference is within the calibration range?
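For reference, here is how I understand the internal-standard calculation to work, as a minimal sketch in Python with NumPy. All of the numbers below are hypothetical (they are not my actual calibration data); the point is that the fit and the back-calculation use only ratios, so in principle the result should not depend on injected volume, which is why the discrepancy confuses me:

```python
import numpy as np

# Hypothetical calibration data (illustrative values only):
# x = area ratio (area_ref / area_std), y = weight ratio (wt_ref / wt_std)
area_ratio = np.array([0.10, 0.35, 0.60, 0.85, 1.10])
wt_ratio = np.array([0.12, 0.40, 0.68, 0.96, 1.24])

# Fit the calibration line: wt_ratio = slope * area_ratio + intercept
slope, intercept = np.polyfit(area_ratio, wt_ratio, 1)

# R-squared of the fit
pred = slope * area_ratio + intercept
ss_res = np.sum((wt_ratio - pred) ** 2)
ss_tot = np.sum((wt_ratio - np.mean(wt_ratio)) ** 2)
r_squared = 1 - ss_res / ss_tot

# For an unknown: measure its area ratio, read the weight ratio off the
# line, then multiply by the known weight of internal standard spiked in.
unknown_area_ratio = 0.50          # hypothetical measured value
unknown_wt_ratio = slope * unknown_area_ratio + intercept
wt_std_added = 0.100               # grams of internal standard added
wt_ref_found = unknown_wt_ratio * wt_std_added
```

Since both the reference and the internal standard are injected together, changing the injection volume scales both peak areas by the same factor and the area ratio should cancel it out, at least in theory.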