Ethical Computing Research
Understanding the limitations and strengths of computation, learning, and simulation is paramount to developing a critical eye in tomorrow's workforce.
Sources of bias, especially in machine learning applications, should always be considered. A fascinating read on such topics:
[1] Perdue, G. N., Ghosh, A., Wospakrik, M., Akbar, F., Andrade, D. A., Ascencio, M., ... & Young, S. (2018). Reducing model bias in a deep learning classifier using domain adversarial neural networks in the MINERvA experiment. Journal of Instrumentation, 13(11), P11020.
-
Inaccuracies arise from round-off errors, data-type conversions, and many other sources introduced in today's large-scale, multi-platform computational models.
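A minimal sketch of two such sources, assuming NumPy purely for its explicit 32-bit types:

```python
import numpy as np

# Round-off: in float32, consecutive integers above 2**24 are no longer all
# representable, so a running counter silently stops increasing.
big = np.float32(2**24)
print(big + np.float32(1.0) == big)        # True: the increment is lost to rounding

# Data-type conversion: narrowing float64 -> float32 silently drops digits.
x64 = np.float64(0.123456789012345678)
x32 = np.float32(x64)
print("float64:", repr(float(x64)))
print("float32:", repr(float(x32)))
```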
-
Data provided to a machine learning model may not cover the full extent of the problem; this is especially important for boundary value problems (BVPs) and for problems with long-range order.
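A minimal sketch of the coverage problem, with a cubic polynomial standing in for a learned model (an assumption made purely for illustration): trained only on a narrow window, it has no information about the target outside that window.

```python
import numpy as np

# Fit only on x in [0, 2]; the in-window fit looks fine, but extrapolation fails.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 2.0, 50)
y_train = np.sin(x_train) + 0.01 * rng.standard_normal(x_train.size)

coeffs = np.polyfit(x_train, y_train, 3)   # cubic surrogate fit on the narrow window

for x in (1.0, 4.0, 8.0):
    print(f"x={x}: fit={np.polyval(coeffs, x):+8.3f}   true sin(x)={np.sin(x):+.3f}")
# Inside the training window the prediction is close; outside it diverges badly.
```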
-
Insufficient sources of randomness prevent high-quality pseudorandom number generation. Poor random number generation can lead to unphysical results in data [Dongjie Zhu et al., J. Stat. Mech. (2023) 073203].
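A minimal sketch of how a weak generator shows up in practice; the 16-bit modulus below is deliberately far smaller than any production generator would use, and NumPy's default_rng (backed by PCG64) serves as the comparison:

```python
import numpy as np

def weak_lcg(n, seed=1, a=1103515245, c=12345, m=2**16):
    """Deliberately weakened linear congruential generator (16-bit state)."""
    out = np.empty(n)
    x = seed
    for i in range(n):
        x = (a * x + c) % m
        out[i] = x / m
    return out

n = 200_000
weak = weak_lcg(n)                              # period is at most 2**16, so draws repeat
good = np.random.default_rng(0).random(n)       # PCG64-backed generator

print("distinct draws (weak LCG):", np.unique(weak).size)   # capped by the short period
print("distinct draws (PCG64)   :", np.unique(good).size)   # essentially all distinct
```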
-
Training data matters, and order matters. Choosing data at random can lead to bias in various sections of a model's output, as the sketch below illustrates.
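A minimal sketch of sampling-induced drift, using a hypothetical dataset in which 5% of the labels belong to a rare class:

```python
import numpy as np

# Small batches drawn uniformly at random can over- or under-represent the rare
# class, so the statistics (and gradient updates) driven by each batch shift
# with the draw; stratified or class-balanced sampling keeps batches stable.
rng = np.random.default_rng(42)
labels = np.array([0] * 950 + [1] * 50)          # 5% rare class overall

for trial in range(3):
    batch = rng.choice(labels, size=32, replace=False)
    print(f"batch {trial}: rare-class fraction = {batch.mean():.3f} (population: 0.05)")
```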
-
Memory stored in a neural network (memorization of training data) may lead to poor performance or hallucinations, and needs further exploration.
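One way to make that concrete is a memorization probe; the sketch below uses a 1-nearest-neighbour rule on randomly labelled data as a stand-in, an assumption for illustration rather than a claim about any particular network:

```python
import numpy as np

# A 1-nearest-neighbour "model" fitted on randomly labelled data scores
# perfectly on its own training points (it has simply stored them) while
# staying at chance level on held-out points.
rng = np.random.default_rng(7)
X_train, X_test = rng.standard_normal((200, 10)), rng.standard_normal((200, 10))
y_train, y_test = rng.integers(0, 2, 200), rng.integers(0, 2, 200)   # random labels

def predict_1nn(X):
    # assign the label of the closest stored training point
    dists = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=-1)
    return y_train[dists.argmin(axis=1)]

print("train accuracy:", (predict_1nn(X_train) == y_train).mean())   # 1.0: pure recall
print("test accuracy :", (predict_1nn(X_test) == y_test).mean())     # ~0.5: chance
```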
-
Different computers have limited addressing widths, and today's interconnected cloud, fog, and IoT-based compute platforms raise the question of how well API software maintains floating-point accuracy across them.
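A minimal sketch of precision loss across service boundaries, using a hypothetical round trip through text serialization and a 32-bit narrowing; only the Python standard library is assumed:

```python
import json
import struct

# One service produces a double-precision value; another serializes it with a
# fixed number of digits (or narrows it to 32 bits) before handing it on.
value = 0.1 + 0.2                                            # 0.30000000000000004 in float64

text_hop = json.loads(json.dumps(round(value, 6)))           # text API keeping 6 digits
narrow_hop = struct.unpack("f", struct.pack("f", value))[0]  # 32-bit binary hop

print("original      :", repr(value))
print("6-digit text  :", repr(text_hop))
print("float32 hop   :", repr(narrow_hop))
# Neither hop returns the original bit pattern; whether that matters depends on
# how the downstream computation amplifies the difference.
```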
-
*Work in progress*