If I claimed I could tell whether someone was a criminal by the shape of their skull, what would your response be? Yet I can train an AI to try to match criminal behaviour with skull shape. The AI will not complain or face a moral dilemma about what I am asking it to do. It cannot even reason that there is something very wrong with the question. It may produce a nonsense model that fails to predict accurately, but that is not the worst outcome. The worst outcome is that it accurately reveals existing prejudice without knowing that it is doing so (or rather, without the operator realising that the model is doing so).