To assess the accuracy of a kitchen scale, a standard weight known to weigh 1 gram is weighed a total of n times, and the mean, x̄, of the weighings is computed.

Suppose the scale readings are Normally distributed with unknown mean µ and known standard deviation σ = 0.01 g.

How large should n be so that a 90% confidence interval for µ has a margin of error of ±0.0001 g?
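
The margin of error of a z confidence interval is m = z*·σ/√n, so solving for n gives n = (z*·σ/m)². A minimal sketch of that computation, assuming the usual table value z* = 1.645 for 90% confidence (this critical value is not stated in the problem itself):

```python
import math

# Quantities from the problem statement, plus the assumed critical value.
sigma = 0.01      # standard deviation of a single reading, in grams
margin = 0.0001   # desired margin of error, in grams
z_star = 1.645    # z* for a two-sided 90% confidence level (standard Normal table)

# Margin of error m = z* * sigma / sqrt(n)  =>  n = (z* * sigma / m)**2
n_exact = (z_star * sigma / margin) ** 2   # 27060.25 with z* = 1.645
n = math.ceil(n_exact)                     # round up so the margin is actually met

print(f"n must be at least {n}")           # 27061
```

Using a more precise critical value (e.g. 1.6449) shifts the exact figure slightly, but the procedure is the same: compute (z*·σ/m)² and round up to the next whole number of weighings.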