Since you are immediately talking about consistency and completeness, let's start from computability theory, rather than from ethics or presentational logic.

Systems that have well-defined axioms and can express integer arithmetic are notoriously incomplete, by some version of Gödel's reasoning. It does not matter what the corresponding semantics would be. So if you have this aim at all, you would need to pursue a very odd sort of system, one in which counting things can never affect the ethical outcome.

But then can any such system be precise enough that anyone would consider it 'complete'? Huge chunks of finance would have to be excluded immediately. Most of us think that the moral value of gain and risk really does depend rather finely upon arithmetical details.

Kantianism (as it gets naively employed, not necessarily as Kant himself rigidly imagined it) tends to rule out arithmetic, or the complexity that would require it, as an intuitive part of the definition of a maxim. But then, by focusing on autonomy, it often gives the answer 'that depends upon an arbitrary negotiation between those involved.' Does that count as being complete?

One basic problem here is that moral (and, more concretely, legal) systems are not consistent by nature; they are generally overdetermined. So there are numerous equally good right answers, none of them perfect, and arbitrary combinations of those multiple right answers are not subject to consistent reasoning without paradox.

This suggests that any real ethical system is not axiomatic but algorithmic: it consists of negotiation processes that govern the acceptable exchange of power and seek a consistent balance. (The point I generally come back to: the central social process is a language-game.) But, by some version of Turing's argument, powerful enough algorithmic systems almost always permit questions that run into the halting problem and do not converge. Instead, the world is full of 'Julia sets' -- algorithms get hung up in infinite regress near the fractal boundaries between basins of attraction. So those are not going to be complete, either.
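The 'Julia set' picture can be made concrete with a toy iteration (purely illustrative, not a model of ethics): iterate z → z² + c and try to decide whether the orbit escapes to infinity or stays bounded. Far from the fractal boundary the question settles quickly; near it, the decision procedure takes longer and longer, and for boundary points it never terminates at all -- a miniature of an algorithmic question that fails to converge.

```python
def escape_time(z0: complex, c: complex, max_iter: int = 10_000):
    """Iterate z -> z**2 + c from z0; return the step at which |z| first
    exceeds 2 (proving escape to infinity), or None if still undecided
    after max_iter steps."""
    z = z0
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return None  # bounded so far -- the procedure has not halted with a verdict

c = -1.0  # a parameter whose filled Julia set has an intricate boundary

print(escape_time(2.5, c))    # far outside: decided at step 0
print(escape_time(0.0, c))    # deep inside (orbit 0, -1, 0, -1, ...): None
print(escape_time(1.62, c))   # just outside a repelling fixed point: takes 5 steps
```

Points ever closer to the boundary (here, the repelling fixed point near 1.618) take ever more iterations before the test resolves; exactly on the boundary, no finite number of iterations decides the question.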

The trend in something like predictive consequentialism is to assume that the best you could hope for, if you want something complete, is a system that involves ad hoc compromises -- inconsistent by design, but trending toward consistency over time. "Rule utilitarianism" with some kind of cutoff on complexity would be an example; it seems to be what judges would like to imagine they do.

The first step is to found the relevant axiom systems? Do you know some? – Mauro ALLEGRANZA – 2019-06-21T13:59:33.943

@Mauro, I do not have any one such system in mind. My question is about a proof that can conclusively show either that no moral system is both complete and consistent, or that there do exist moral systems that satisfy those parameters. – hashes – 2019-06-21T14:17:56.087

But you can ask for consistency only if you have a "formalized" theory available: an underlying logic (rules of inference) and specific "moral theory" axioms. – Mauro ALLEGRANZA – 2019-06-21T14:26:08.663

Thanks @Mauro, I'll go into this. – hashes – 2019-06-21T14:29:22.767

To add to the point: While there may be complete and consistent formalisations of ethical systems, you either end up with something purely formal or you need a formalisation of all morally relevant situations, which amounts to a complete and consistent formalisation of natural language. That project proved futile in the first decades of the twentieth century. Practical relations to the world are not formal. – Philip Klöcking – 2019-06-21T14:33:10.967

Moral reasoning is neither deductive nor axiomatic; it employs analogical judgments over specific cases based on vague principles, rather than inferential derivations from general laws. So the general idea does not really make sense. There is something called formal ethics, but it only covers the form of ethical principles rather than their content, so it is (very) incomplete by design. See also Are analogies between ethics and mathematics philosophically coherent?

– Conifold – 2019-06-21T15:56:15.530

This reminds me of Thelema and Aleister Crowley; put simply, "do what one wills, as long as it doesn't stop any other's will." – scott – 2019-06-22T07:38:35.933

@scott Rousseau and Kant put it in a similar fashion way earlier: the freedom of every individual should have its limit in the freedom of others. – Philip Klöcking – 2019-06-22T09:04:21.737