Policymakers manipulate the real world to match formulae”. He further noted that the ability of risk assessment to provide clear guidance has been “overstated by risk assessors and overused by regulators and lawmakers”. We can infer from this that the latitude of uncertainty introduced by myriad factors lends itself to appropriating the measure in accordance with one’s interest or advocacy.
However, we do not preclude the validity of the citizens’ concerns. As Contini et al. (1991) noted, a risk analysis of the accidental release of ammonia, conducted by teams of scientists from eleven European countries, produced eleven different risk estimates, because the numerical results depended on the many assumptions introduced at every step of the risk analysis.
In a presentation on the uses, limitations and abuses of risk assessment, it was observed that risk assessments are used as a tool or proof to advance technologies as hard science, relying on unrealistic assumptions that are kept hidden rather than stated openly (Howard). This lends credence to the citizens’ concern that there seems to be a blind adherence to, or faith in, assessments masked as hard science but often based on unrealistic assumptions. Their clamor for validly tested models in assessing population exposure therefore seems justifiable on this account.
However, requiring that exposure assessments be based only on validated models does not in itself guarantee the integrity of an exposure assessment. In a study on the validation strategy for the Integrated Exposure Uptake Biokinetic (IEUBK) Model for Lead in Children, the model was acknowledged to be inadequate for prediction because of the multiplicity of site-specific variability; it can be applied only to individual data sets owing to “many community-specific characteristics which may be difficult to quantify” (Environmental Protection Agency [EPA], 1994).
However, as applied to environmental