The statistical nature of measurement

We defined measurement as the process of separating a single value, or a subset of values, from a spectrum, which is the set of all possible measurement outcomes. It is a philosophical matter which of these values is the "true" measure of the observable, or even whether one can speak of a "true" measure at all. In classical mechanics it is possible to adopt the Platonic point of view, which posits the existence of an actual value for the observable. In this picture, repeated experiments serve to bring us closer to the "true" measure, since any single observation carries an error that must be taken into account.
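To fix notation for what follows (a convention we introduce here purely for illustration), one may write the spectrum of an observable $A$ as
\[
\sigma_A = \{a_1, a_2, a_3, \ldots\},
\]
so that a single measurement selects some value $a_k \in \sigma_A$, and the Platonic view amounts to singling out one element of $\sigma_A$ as the actual value of the observable.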
However, this approach to the measurement process runs into serious difficulties in quantum physics, where the result of an experiment depends strongly on the observable being measured. This is the origin of the well-known wave-particle duality observed in double-slit experiments \cite{experiment,interference,studios}. A general theory of measurement therefore cannot rely on the assumption that a "true" measure exists, and we wish to stay as close as possible to a general concept of measurement.
In this case, we assume that the result of a measurement of an observable is aleatory within the range of the spectrum. This means that a single measurement can select any value of the spectrum, and there is no way to predict the result: there is no "true" measure. The best we can do is to perform several measurements in sequence, always taking care that the physical system is prepared in the same initial state before each run.
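In this picture (a minimal sketch, assuming for simplicity a discrete spectrum and the notation above), the assumption can be phrased by attaching a probability distribution to the spectrum,
\[
P(a_k) \geq 0, \qquad \sum_{k} P(a_k) = 1,
\]
so that repeated measurements on identically prepared systems sample this distribution rather than reveal a pre-existing value.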
Therefore, it is possible to perform a single measurement, but a large number of measurements should be made before a single value is established for the measure. This single value is taken to be the mean, and an error is assigned through the standard deviation, for example. We must thus embrace the statistical nature of measurement, since no single measurement can be said to represent a "true" measure.
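For definiteness, if $a^{(1)}, a^{(2)}, \ldots, a^{(N)}$ denote the outcomes of $N$ repeated measurements on identically prepared systems, each drawn from the spectrum, the reported value and its error may be taken as the sample mean and the sample standard deviation,
\[
\bar{a} = \frac{1}{N} \sum_{j=1}^{N} a^{(j)}, \qquad
\Delta a = \sqrt{ \frac{1}{N-1} \sum_{j=1}^{N} \left( a^{(j)} - \bar{a} \right)^2 },
\]
with $\Delta a / \sqrt{N}$ giving the standard error of the mean as $N$ grows.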