OK, the salary example was a bit simplified; in my case it was about giving financial help to someone. That help is based on a monthly allowance, which is then split over the number of allocated days in the month; that's where the 16/31 comes from.
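To make that concrete, here's a minimal sketch of the kind of computation I mean (the allowance amount and day counts are made up):

from decimal import Decimal, ROUND_HALF_UP

# Hypothetical figures: a 900.00 monthly allowance, 16 allocated days
# out of a 31-day month.
allowance = Decimal("900.00")
days_allocated = 16
days_in_month = 31

# Either way the division gives more than two decimals,
# so a rounding decision has to be made somewhere.
as_float = round(900.00 * 16 / 31, 2)
as_decimal = (allowance * days_allocated / days_in_month).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP)
print(as_float, as_decimal)   # 464.52 464.52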
Now for your example, I see that float and decimal just give the same result. As long as I'm doing financial computations that end in a final amount, I'm OK with 2 decimals. And both of your computations work fine.
The decimal module in Python gives you a number of significant digits, not a number of decimal places. You'll end up using .quantize() to get to two decimals, which is rounding (so, no advantage over floats there).
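A quick illustration of the significant-digits point, using nothing but the standard decimal module:

from decimal import Decimal, getcontext

# The context precision counts significant digits, not decimal places:
getcontext().prec = 4
print(Decimal("12345.678") + Decimal("0"))   # 1.235E+4

# Back to the default precision: a division gives 28 significant digits,
# and an explicit .quantize() is what brings it down to two decimals.
getcontext().prec = 28
third = Decimal("1") / Decimal("3")
print(third)                                 # 0.3333... (28 significant digits)
print(third.quantize(Decimal("0.01")))       # 0.33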
As I said, as soon as you have division/multiplication, you'll have to take care of rounding manually. But for addition/subtraction, decimal doesn't need rounding (which is better).
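Something like this is what I mean, with two made-up amounts:

from decimal import Decimal

# Addition of two-decimal amounts stays exact with Decimal...
print(Decimal("0.10") + Decimal("0.20"))   # 0.30
print(0.10 + 0.20)                         # 0.30000000000000004

# ...but as soon as you divide, someone has to decide how to round:
print(Decimal("100.00") / 3)               # 33.333... (28 significant digits, not 2 decimals)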
The fact is that everybody says "floats are bad" because rounding is tricky. But rounding is always possible. And my point is that rounding is tricky even with the decimal module.
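One concrete example of what I mean by tricky: decimal's default rounding mode is ROUND_HALF_EVEN (banker's rounding), which is usually not what people expect from an accounting rule.

from decimal import Decimal, ROUND_HALF_UP

print(Decimal("2.5").quantize(Decimal("1")))                          # 2  (default ROUND_HALF_EVEN)
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 3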
And about bragging, I can tell you one more thing: rounding errors were absolutely not the worst of our problems. The worst problem is being able to explain to the accountant that your computation is right. That's the hard part, 'cos some computations involve hundreds of business decisions. When you end up on a rounding error, you're actually happy, 'cos it's easy to understand, explain and fix. And don't get me started on how laws (yes, the actual texts) sometimes specify how rounding rules should work.
# add 0.1 ten million times, then scale and round
total = 0
for _ in range(10_000_000):
    total += 0.1
print(round(total * 1000, 2))
What should this code print? And what does it actually print?
I mean, sure, this is a contrived example. But can you guarantee that your code doesn't do anything similarly bad? Maybe the chance is tiny, but still: wouldn't you like to know for sure?
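For what it's worth, here's a Decimal counterpart of the same loop; since every step is an exact decimal addition, no error accumulates (slow, but exact):

from decimal import Decimal

total = Decimal("0")
for _ in range(10_000_000):
    total += Decimal("0.1")   # each addition is exact, nothing drifts
print(total * 1000)           # 1000000000.0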
We agree: on additions, floats are tricky. But on divisions and multiplications, they're not any worse. Dividing something by 3 will produce an infinite number of decimals that you'll have to round at some point (unless we use what you proposed, fractions; but in that case it's a completely different story).
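To illustrate the fractions side of it (a toy amount, just to show the idea):

from fractions import Fraction

# With fractions, dividing by 3 stays exact; rounding only happens
# when (and if) you finally convert to a displayable amount.
share = Fraction("100.00") / 3
print(share)                     # 100/3
print(share * 3 == 100)          # True, nothing was lost
print(round(float(share), 2))    # 33.33, rounding deferred to the very end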